Thursday, February 24, 2011

Atomic Antennae Transmit Quantum Information Across a Microchip

The researchers have published their work in the scientific journal Nature.

Six years ago scientists at the University of Innsbruck realized the first quantum byte -- a quantum computer with eight entangled quantum particles, a record that still stands. "Nevertheless, to make practical use of a quantum computer that performs calculations, we need a lot more quantum bits," says Prof. Rainer Blatt, who, with his research team at the Institute for Experimental Physics, created the first quantum byte in an electromagnetic ion trap. "In these traps we cannot string together large numbers of ions and control them simultaneously."

To solve this problem, the scientists have started to design a quantum computer based on a system of many small registers, which have to be linked. To achieve this, Innsbruck quantum physicists have now developed a revolutionary approach based on a concept formulated by theoretical physicists Ignacio Cirac and Peter Zoller. In their experiment, the physicists electromagnetically coupled two groups of ions over a distance of about 50 micrometers. Here, the motion of the particles serves as an antenna. "The particles oscillate like electrons in the poles of a TV antenna and thereby generate an electromagnetic field," explains Blatt. "If one antenna is tuned to the other one, the receiving end picks up the signal of the sender, which results in coupling." The energy exchange taking place in this process could be the basis for fundamental computing operations of a quantum computer.

Antennae amplify transmission

"We implemented this new concept in a very simple way," explains Rainer Blatt. In a miniaturized ion trap a double-well potential was created, trapping the calcium ions. The two wells were separated by 54 micrometers."By applying a voltage to the electrodes of the ion trap, we were able to match the oscillation frequencies of the ions," says Blatt.

"This resulted in a coupling process and an energy exchange, which can be used to transmit quantum information." A direct coupling of two mechanical oscillations at the quantum level has never been demonstrated before. In addition, the scientists show that the coupling is amplified by using more ions in each well."These additional ions function as antennae and increase the distance and speed of the transmission," says Rainer Blatt, who is excited about the new concept. This work constitutes a promising approach for building a fully functioning quantum computer.

"The new technology offers the possibility to distribute entanglement. At the same time, we are able to target each memory cell individually," explains Rainer Blatt. The new quantum computer could be based on a chip with many micro traps, where ions communicate with each other through electromagnetic coupling. This new approach represents an important step towards practical quantum technologies for information processing.

The quantum researchers are supported by the Austrian Science Fund FWF, the European Union, the European Research Council and the Federation of Austrian Industries Tyrol.


Source

Tuesday, February 22, 2011

'Fingerprints' Match Molecular Simulations With Reality

ORNL's Jeremy Smith collaborated on devising a method -- dynamical fingerprints -- that reconciles the different signals between experiments and computer simulations to strengthen analyses of molecules in motion. The research will be published in the Proceedings of the National Academy of Sciences.

"Experiments tend to produce relatively simple and smooth-looking signals, as they only 'see' a molecule's motions at low resolution," said Smith, who directs ORNL's Center for Molecular Biophysics and holds a Governor's Chair at the University of Tennessee."In contrast, data from a supercomputer simulation are complex and difficult to analyze, as the atoms move around in the simulation in a multitude of jumps, wiggles and jiggles. How to reconcile these different views of the same phenomenon has been a long-standing problem."

The new method solves the problem by calculating peaks within the simulated and experimental data, creating distinct "dynamical fingerprints." The technique, conceived by Smith's former graduate student Frank Noe, now at the Free University of Berlin, can then link the two datasets.
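
As a rough illustration of the idea (and only that; this is not the published dynamical-fingerprints method), one can extract the dominant peaks from a simulated relaxation spectrum and from a lower-resolution "experimental" one, then pair them up. Everything in the sketch below, including the spectra themselves, is synthetic.

```python
# Synthetic illustration only; this is not the published dynamical-fingerprints
# method. Extract peaks from a "simulation" spectrum and a lower-resolution
# "experiment" spectrum, then pair each experimental peak with its nearest
# simulated one.
import numpy as np
from scipy.signal import find_peaks

freq = np.linspace(0.1, 10.0, 2000)          # arbitrary frequency axis

def lorentzian(f, f0, width, amp):
    return amp * width**2 / ((f - f0)**2 + width**2)

# The simulation "sees" three motions; the experiment resolves only two broad
# ones (all numbers invented).
sim = (lorentzian(freq, 1.5, 0.1, 1.0) + lorentzian(freq, 4.0, 0.1, 0.6)
       + lorentzian(freq, 7.5, 0.1, 0.3))
exp = lorentzian(freq, 1.6, 0.8, 1.0) + lorentzian(freq, 7.2, 1.0, 0.4)

sim_peaks, _ = find_peaks(sim, prominence=0.05)
exp_peaks, _ = find_peaks(exp, prominence=0.05)
print("simulation fingerprint:", freq[sim_peaks].round(2))
print("experiment fingerprint:", freq[exp_peaks].round(2))

# Naive matching step: nearest simulated peak for each experimental peak.
for fe in freq[exp_peaks]:
    fs = freq[sim_peaks][np.argmin(np.abs(freq[sim_peaks] - fe))]
    print(f"experimental peak at {fe:.2f} <-> simulated peak at {fs:.2f}")
```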

Supercomputer simulations and modeling capabilities can add a layer of complexity missing from many types of molecular experiments.

"When we started the research, we had hoped to find a way to use computer simulation to tell us which molecular motions the experiment actually sees," Smith said."When we were finished we got much more -- a method that could also tell us which other experiments should be done to see all the other motions present in the simulation. This method should allow major facilities like the ORNL's Spallation Neutron Source to be used more efficiently."

Combining the power of simulations and experiments will help researchers tackle scientific challenges in areas like biofuels, drug development, materials design and fundamental biological processes, which require a thorough understanding of how molecules move and interact.

"Many important things in science depend on atoms and molecules moving," Smith said."We want to create movies of molecules in motion and check experimentally if these motions are actually happening."

"The aim is to seamlessly integrate supercomputing with the Spallation Neutron Source so as to make full use of the major facilities we have here at ORNL for bioenergy and materials science development," Smith said.

The collaborative work included researchers from L'Aquila, Italy, Wuerzburg and Bielefeld, Germany, and the University of California at Berkeley. The research was funded in part by a Scientific Discovery through Advanced Computing grant from the DOE Office of Science.


Source

Sunday, February 20, 2011

Brain-Machine Interfaces Make Gains by Learning About Their Users, Letting Them Rest, and Allowing for Multitasking

In a typical brain-computer interface (BCI) set-up, users can send one of three commands -- left, right, or no-command. No-command is the static state between left and right and is necessary for a brain-powered wheelchair to continue going straight, for example, or to stay put in front of a specific target. But it turns out that no-command is very taxing to maintain and requires extreme concentration. After about an hour, most users are spent. Not much help if you need to maneuver that wheelchair through an airport.

In an ongoing study demonstrated by Millán and doctoral student Michele Tavella at the AAAS 2011 Annual Meeting in Washington, D.C., the scientists hook volunteers up to a BCI and ask them to read, speak, or read aloud while delivering as many left and right commands as possible or delivering a no-command. By using statistical analysis programmed by the scientists, Millán's BCI can distinguish between left and right commands and learn when each subject is sending one of these versus a no-command. In other words, the machine learns to read the subject's mental intention. The result is that users can mentally relax and also execute secondary tasks while controlling the BCI.
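
The article does not spell out the statistical method, so the following is only a generic sketch of the kind of probabilistic discrimination involved: a linear discriminant classifier trained on synthetic EEG-like band-power features for "left", "right" and "no-command". The features, class means and library choice (scikit-learn) are assumptions made for illustration.

```python
# Generic sketch of the statistical idea, not the EPFL group's classifier:
# a linear discriminant model trained on synthetic EEG-like band-power
# features for "left", "right" and "no-command". All data are fake.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 300

def fake_trials(mean, n):
    """Synthetic 4-channel band-power features drawn around a class mean."""
    return rng.normal(loc=mean, scale=1.0, size=(n, 4))

X = np.vstack([
    fake_trials([2.0, 0.0, 0.0, 0.0], n),   # imagined "left"
    fake_trials([0.0, 2.0, 0.0, 0.0], n),   # imagined "right"
    fake_trials([0.0, 0.0, 0.0, 0.0], n),   # relaxed "no-command"
])
y = ["left"] * n + ["right"] * n + ["no-command"] * n

clf = LinearDiscriminantAnalysis().fit(X, y)

# At run time the classifier reports a probability for each intention, so the
# system can ignore weak evidence and fall back to "no-command" while the
# user relaxes or attends to a secondary task such as reading aloud.
probs = clf.predict_proba([[1.8, 0.1, -0.2, 0.3]])[0]
for label, p in zip(clf.classes_, probs):
    print(f"{label}: {p:.2f}")
```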

The so-called Shared Control approach to facilitating human-robot interactions employs image sensors and image-processing to avoid obstacles. According to Millán, however, Shared Control isn't enough to let an operator rest or concentrate on more than one command at once, limiting long-term use.

Millán's new work complements research on Shared Control and makes multitasking a reality while at the same time allowing users to catch a break. His trick is in decoding the signals coming from EEG readings on the scalp -- readings that represent the activity of millions of neurons and have notoriously low resolution. By incorporating statistical analysis, or probability theory, his BCI allows for both targeted control -- maneuvering around an obstacle -- and more precise tasks, such as staying on a target. It also makes it easier to give simple commands like "go straight" that need to be executed over longer periods of time (think back to that airport) without having to focus on giving the same command over and over again.

It will be a while before this cutting-edge technology makes the move from lab to production line, but Millán's prototypes are the first working models of their kind to use probability theory to make BCIs easier to use over time. His next step is to combine this new level of sophistication with Shared Control in an ongoing effort to take BCI to the next level, necessary for widespread use. Further advancements, such as finer-grained interpretation of cognitive information, are being developed in collaboration with the European Tools for Brain-Computer Interaction (TOBI) project (www.tobi.com). The multinational project is headed by Professor Millán and has moved into the clinical testing phase for several BCIs.


Source

Saturday, February 19, 2011

Augmented Reality System for Learning Chess

An ordinary webcam, a chess board, a set of 32 pieces and custom software are the key elements in the final degree project of the telecommunications engineering students Ivan Paquico and Cristina Palmero, from UPC-Barcelona Tech's Terrassa School of Engineering (EET). The project, for which the students were awarded a distinction, was directed by Professor Jordi Voltas and completed during an international mobility placement in Finland.

The system created by Ivan Paquico, the 2001 Spanish Internet chess champion, and Cristina Palmero, a keen player and federation member, is a didactic tool that will help chess clubs and associations to teach the game and make it more appealing, particularly to younger players.

The system combines augmented reality, computer vision and artificial intelligence, and the only equipment required is a high-definition home webcam, the Augmented Reality Chess software, a standard board and pieces, and a set of cardboard markers the same size as the squares on the board, each marked with the first letter of the corresponding piece: R for the king (rei in Catalan), D for the queen (dama), T for the rooks (torres), A for the bishops (alfils), C for the knights (cavalls) and P for the pawns (peons).

Learning chess with virtual pieces

To use the system, learners play with an ordinary chess board but move the cardboard markers instead of standard pieces. The table is lit from above and the webcam focuses on the board, and every time the player moves one of the markers the system recognises the piece and reproduces the move in 3D on the computer screen, creating a virtual representation of the game.

For example, if the learner moves the marker P (pawn), the corresponding piece will be displayed on the screen in 3D, with all of the possible moves indicated. This is a simple and attractive way of showing novices the permitted movements of each piece, making the system particularly suitable for children learning the basics of this board game.
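
To make the marker-to-piece step concrete, here is a minimal sketch (not the students' Augmented Reality Chess code) that maps the Catalan marker letters onto piece names and lists the squares that could be highlighted for a white pawn on an otherwise empty board. The function, the empty-board simplification and the output format are assumptions for illustration.

```python
# Illustrative sketch only, not the Augmented Reality Chess software: map the
# Catalan marker letters to pieces and list the squares that could be
# highlighted for a white pawn on an otherwise empty board.
MARKER_TO_PIECE = {
    "R": "king (rei)", "D": "queen (dama)", "T": "rook (torre)",
    "A": "bishop (alfil)", "C": "knight (cavall)", "P": "pawn (peó)",
}

def white_pawn_moves(square: str) -> list[str]:
    """Forward pushes for a white pawn, assuming no blocking pieces."""
    file, rank = square[0], int(square[1])
    moves = []
    if rank < 8:
        moves.append(f"{file}{rank + 1}")
    if rank == 2:                 # a pawn may advance two squares from its start rank
        moves.append(f"{file}{rank + 2}")
    return moves

print("Detected marker 'P' ->", MARKER_TO_PIECE["P"])
print("Highlighted squares for a pawn on e2:", white_pawn_moves("e2"))
```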

Making chess accessible to all

The learning tool also incorporates a move-tracking program called Chess Recognition: from the images captured by the webcam, the system instantly recognises and analyses every movement of every piece and can act as a referee, identify illegal moves and provide the players with an audible description of the game status. According to Ivan Paquico and Cristina Palmero, this feature could be very useful for players with visual impairment -- who have their own federation and, until now, have had to play with specially adapted boards and pieces -- and for clubs and federations, tournament organisers and enthusiasts of all levels.

The Chess Recognition program saves whole games so that they can be shared, broadcast online and viewed on demand, and can generate a complete user history for analysing the evolution of a player's game. The program also creates an automatic copy of the scoresheet (the official record of each game) for players to view or print.

The technology for playing chess and recording games online has been available for a number of years, but until now players needed sophisticated equipment including pieces with integrated chips and a special electronic board with a USB connection. The standard retail cost of this equipment is between 400 and 500 euros.


Source

Friday, February 18, 2011

Scientists Steer Car With the Power of Thought

They then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first used the sensors to measure brain waves in such a way that a person could move a virtual cube in different directions with the power of his or her thoughts. The test subject thinks of four situations that are associated with driving, for example, "turn left" or "accelerate." In this way the person trained the computer to interpret bioelectrical wave patterns emitted from his or her brain and to link them to a command that could later be used to control the car. The computer scientists connected the measuring device with the steering, accelerator, and brakes of a computer-controlled vehicle, which made it possible for the subject to influence the movement of the car just by using his or her thoughts.
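
Only two of the four trained situations are named above, so the sketch below is purely hypothetical: it shows one simple way a classifier's output label could be mapped onto steering, throttle and brake set-points for a drive-by-wire controller. The labels "turn_right" and "brake", the numeric set-points and the DriveCommand structure are invented; this is not the AutoNOMOS interface.

```python
# Purely hypothetical sketch, not the AutoNOMOS interface: once the trained
# classifier has turned an EEG pattern into a label, map that label onto
# steering/throttle/brake set-points for the drive-by-wire controller.
# The labels "turn_right" and "brake" and all numbers are invented here.
from dataclasses import dataclass

@dataclass
class DriveCommand:
    steering: float   # -1.0 = full left, +1.0 = full right
    throttle: float   #  0.0 = coast, 1.0 = full acceleration
    brake: float      #  0.0 = released, 1.0 = full braking

LABEL_TO_COMMAND = {
    "turn_left":  DriveCommand(steering=-0.5, throttle=0.2, brake=0.0),
    "turn_right": DriveCommand(steering=+0.5, throttle=0.2, brake=0.0),
    "accelerate": DriveCommand(steering=0.0,  throttle=0.6, brake=0.0),
    "brake":      DriveCommand(steering=0.0,  throttle=0.0, brake=0.8),
}

def command_for(label: str) -> DriveCommand:
    """Fall back to gentle braking if the classifier output is unrecognised."""
    return LABEL_TO_COMMAND.get(label, DriveCommand(0.0, 0.0, 0.3))

print(command_for("turn_left"))
```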

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments they investigate hybrid control approaches, i.e., those in which people work with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver


Source

Thursday, February 17, 2011

Controlling a Computer With Thoughts?

The projects build upon ongoing research conducted in epilepsy patients who had the interfaces temporarily placed on their brains and were able to move cursors and play computer games, as well as in monkeys that through interfaces guided a robotic arm to feed themselves marshmallows and turn a doorknob.

"We are now ready to begin testing BCI technology in the patients who might benefit from it the most, namely those who have lost the ability to move their upper limbs due to a spinal cord injury," said Michael L. Boninger, M.D., director, UPMC Rehabilitation Institute, chair, Department of Physical Medicine and Rehabilitation, Pitt School of Medicine, and a senior scientist on both projects."It's particularly exciting for us to be able to test two types of interfaces within the brain."

"By expanding our research from the laboratory to clinical settings, we hope to gain a better understanding of how to train and motivate patients who will benefit from BCI technology," said Elizabeth Tyler-Kabara, M.D., Ph.D., a UPMC neurosurgeon and assistant professor of neurological surgery and bioengineering, Pitt Schools of Medicine and Engineering, and the lead surgeon on both projects.

In one project, funded by an $800,000 grant from the National Institutes of Health, a BCI based on electrocorticography (ECoG) will be placed on the motor cortex surface of a spinal cord injury patient's brain for up to 29 days. The neural activity picked up by the BCI will be translated through a computer processor, allowing the patient to learn to control computer cursors, virtual hands, computer games and assistive devices such as a prosthetic hand or a wheelchair.

The second project, funded by the Defense Advanced Research Projects Agency (DARPA) for up to $6 million over three years, is part of a program led by the Johns Hopkins University Applied Physics Laboratory (APL), Laurel, Md. It will further develop technology tested in monkeys by Andrew Schwartz, Ph.D., professor of neurobiology, Pitt School of Medicine, and also a senior investigator on both projects.

It uses an interface that is a tiny, 10-by-10 array of electrodes that is implanted on the surface of the brain to read activity from individual neurons. Those signals will be processed and relayed to maneuver a sophisticated prosthetic arm.

"Our animal studies have shown that we can interpret the messages the brain sends to make a simple robotic arm reach for an object and turn a mechanical wrist," Dr. Schwartz said."The next step is to see not only if we can make these techniques work for people, but also if we can make the movements more complex."

In the study, which is expected to begin by late 2011, participants will get two separate electrodes. In future research efforts, the technology may be enhanced with an innovative telemetry system that would allow wireless control of a prosthetic arm, as well as a sensory component.

"Our ultimate aim is to develop technologies that can give patients with physical disabilities control of assistive devices that will help restore their independence," Dr. Boninger said.


Source

Wednesday, February 16, 2011

Reconfigurable Supercomputing Outperforms Rivals in Important Science Applications

In November, the TOP500 list of the world's most powerful supercomputers, for the first time ever, named the Chinese Tianhe-1A system at the National Computer Center in Tianjin, China as No. 1.

In his state of the union speech, President Barack Obama noted, "Just recently, China became home of the world's largest solar research facility, and the world's fastest computer."

But that list does not include reconfigurable supercomputers such as Novo-G, built and developed at the University of Florida, said Alan George, professor of electrical and computer engineering, and director of the National Science Foundation's Center for High-Performance Reconfigurable Computing, known as CHREC.

"Novo-G is believed to be the most powerful reconfigurable machine on the planet and, for some applications, it is the most powerful computer of any kind on the planet," George said.

"It is very difficult to accurately rank supercomputers because it depends upon what you want them to do," George said, adding that the TOP500 list ranks supercomputers by their performance on a few basic routines in linear algebra using 64-bit, floating-point arithmetic.

However, a significant number of the most important applications in the world do not adhere to that standard, including a growing list of vital applications in health and life sciences, signal and image processing, financial science, and more under study with Novo-G at Florida.

Most of the world's computers, from smart-phones to laptops to Tianhe-1A, feature microprocessors with fixed-logic hardware structures. All software applications for these systems must conform to these fixed structures, which can lead to a significant loss in speed and increase in energy consumption.

By contrast, with reconfigurable machines, a relatively new and highly innovative form of computing, the architecture can adapt to match the unique needs of each application, which can lead to much faster speed and less wasted energy due to adaptive hardware customization.

Novo-G uses 192 reconfigurable processors and "can rival the speed of the world's largest supercomputers at a tiny fraction of their cost, size, power, and cooling," the researchers noted in a new article on Novo-G published in the January-February edition of the IEEE magazine Computing in Science and Engineering.

Conventional supercomputers, some the size of a large building, can consume up to millions of watts of electrical power, generating massive amounts of heat, whereas Novo-G is about the size of two home refrigerators and consumes less than 8,000 watts.

Later this year, researchers will double the reconfigurable capacity of Novo-G, an upgrade only requiring a modest increase in size, power, and cooling, unlike upgrades with conventional supercomputers.

In their article, the researchers discuss Novo-G and its obvious advantages for use in certain applications such as genome research, cancer diagnosis, plant science, and the ability to analyze large data sets.

Herman Lam, an electrical and computer engineering professor and co-investigator on Novo-G, said some vital science applications that can take months or years to run on a personal computer can run in minutes or hours on the Novo-G, such as applications for DNA sequence alignment at UF's Interdisciplinary Center for Biotechnology Research.

CHREC includes research sites at four universities including Florida, Brigham Young, George Washington and Virginia Tech. In addition, there are more than 30 partners in CHREC, such as the U.S. Air Force, Army, and Navy, NASA, National Security Agency, Boeing, Honeywell, Lockheed Martin, Monsanto, Northrop Grumman, and the Los Alamos, Oak Ridge and Sandia National Labs.


Source

Tuesday, February 15, 2011

US Secret Service Moves Tiny Town to Virtual Tiny Town: Teaching Secret Service Agents and Officers How to Prepare a Site Security Plan

Now, with help from the Department of Homeland Security (DHS) Science & Technology Directorate (S&T), the Secret Service is giving training scenarios a high-tech edge: moving from static tabletop models to virtual kiosks with gaming technology and 3D modeling.

For the past 40 years, a miniature model environment called "Tiny Town" has been one of the methods used to teach Secret Service agents and officers how to prepare a site security plan. The model includes different sites -- an airport, outdoor stadium, urban rally site and a hotel interior -- and uses scaled models of buildings, cars and security assets. The scenario-based training allows students to illustrate a dignitary's entire itinerary and accommodate unrelated, concurrent activities in a public venue. Various elements of a visit are covered, such as an arrival, rope line or public remarks. The class works as a whole and in small groups to develop and present their security plan.

Enter videogame technology. The Secret Service's James J. Rowley Training Center near Washington, D.C., sought to take these scenarios beyond a static environment to encompass the dynamic threat spectrum that exists today, while taking full advantage of the latest computer software technology.

The agency's Security and Incident Modeling Lab wanted to update Tiny Town and create a more relevant and flexible training tool. With funding from DHS S&T, the Secret Service developed the Site Security Planning Tool (SSPT), a new training system dubbed "Virtual Tiny Town" by instructors, with high-tech features:

  • 3D models and game-based virtual environments
  • Simulated chemical plume dispersion for making and assessing decisions
  • A touch interface to foster collaborative, interactive involvement by student teams
  • A means to devise, configure, and test a security plan that is simple, engaging, and flexible
  • Both third- and first-person viewing perspectives for overhead site evaluation and for a virtual "walk-through" of the site, reflecting how it would be performed in the field.

The new technology consists of three kiosks, each composed of a 55" Perceptive Pixel touch screen with an attached projector and camera, and a computer running Virtual Battle Space (VBS2) as the base simulation game. The kiosks can accommodate a team of up to four students, and each kiosk's synthetic environment, along with the team's crafted site security plan, can be displayed on a large wall-mounted LED 3D TV monitor for conducting class briefings and demonstrating simulated security challenges.

In addition to training new recruits, SSPT can also provide in-service protective details with advanced training on a range of scenarios, including preparation against chemical, biological or radiological attacks, armed assaults, suicide bombers and other threats.

Future enhancements to SSPT will include modeling the resulting health effects and crowd behaviors of a chemical, radiological or biological attack, to better prepare personnel for a more comprehensive array of scenarios and the necessary life-saving actions required to protect dignitaries and the public alike.

The Site Security Planning Tool development is expected to be completed and activated by spring 2011.


Source

Monday, February 14, 2011

Culling Can't Control Deadly Bat Disease, Mathematical Model Shows

White-nose syndrome, which is estimated to have killed over a million bats in a three-year period, is probably caused by a newly discovered cold-adapted fungus, Geomyces destructans. The new model examines how WNS is passed from bat to bat and concludes that culling would not work because of the complexity of bat life history and because the fungal pathogen occurs in the caves and mines where the bats live.

"Because the disease is highly virulent, our model results support the hypothesis that transmission occurs in all contact areas," write the paper's authors, Tom Hallam and Gary McCracken, both of the University of Tennessee."Our simulations indicated culling will not control WNS in bats primarily because contact rates are high among colonial bats, contact occurs in multiple arenas, and periodic movement between arenas occurs."

Ground work on the model was initiated in a 2009 modeling workshop on white-nose syndrome held at the National Institute for Mathematical and Biological Synthesis (NIMBioS) in Knoxville, Tennessee. At the interdisciplinary workshop, experts in the fields of bat physiology, fungal ecology, ecotoxicology, and epidemiology discussed ways in which mathematical modeling could be applied to predict and control the spread of WNS.

"NIMBioS' support for the workshop that initiated this project was crucial in helping formulate models that could be useful in looking at white-nose syndrome," Hallam said.

Culling of bats in areas where the disease is present is one of several options that have been considered by state and federal agencies as a way to control the disease. However, a review of management options for controlling WNS in the paper indicates that culling is ineffective for disease control in wild animals and in some cases, can exacerbate the spread.

White-nose syndrome first appeared in a cave in upstate New York in 2006, and has since spread to 14 states and as far north as Canada. Regional extinctions of the most common bat species, the little brown bat, are predicted within two decades due to WNS.

Eating up to two-thirds of their body weight in insects every night, bats help suppress insect populations ultimately reducing crop damage and the quantities of insecticides used on crops. Bats also play an important ecological role in plant pollination and seed dissemination.


Source

Saturday, February 12, 2011

3-D Digital Dinosaur Track Download: A Roadmap for Saving at-Risk Natural History Resources

The SMU researchers used portable laser scanning technology to capture field data from a huge 110 million-year-old Texas dinosaur track and then create an exact, to-scale 3D facsimile. They share their protocol and findings with the public -- as well as their downloadable 145-megabyte model -- in the online scientific journal Palaeontologia Electronica.

The model duplicates an actual dinosaur footprint fossil that is slowly being destroyed by weathering because it's on permanent outdoor display, says SMU paleontologist Thomas L. Adams, lead author of the scientific article. The researchers describe in the paper how they created the digital model and discuss the implications for digital archiving and preservation.

"This paper demonstrates the feasibility of using portable 3D laser scanners to capture field data and create high-resolution, interactive 3D models of at-risk natural history resources," write the authors.

"3D digitizing technology provides a high-fidelity, low-cost means of producing facsimiles that can be used in a variety of ways," they say, adding that the data can be stored in online museums for distribution to researchers, educators and the public.

SMU paleontologist Louis L. Jacobs is one of the coauthors on the article.

"The protocol for distance scanning presented in this paper is a roadmap for establishing a virtual museum of fossil specimens from inaccessible corners across the globe," Jacobs said.

Paleontologists propose the term "digitype" for digital models

Scientists increasingly are using computed tomography and 3D laser scanners to produce high-quality 3D digital models, say Adams and his colleagues, including to capture high-resolution images from remote field sites.

SMU's full-resolution, three-dimensional digital model of the 24-by-16-inch Texas footprint is one of the first to archive an at-risk fossil, they say.

The SMU paleontologists propose the term "digitype" for such facsimiles, writing in their article "High Resolution Three-Dimensional Laser-scanning of the type specimen of Eubrontes (?) Glenrosensis Shuler, 1935, from the Comanchean (Lower Cretaceous) of Texas: Implications for digital archiving and preservation."

Laser scanning is superior to other methods commonly used to create a model because the procedure is noninvasive and doesn't harm the original fossil, the authors say. Traditional molding and casting procedures, such as rubber or silicone molds, can damage specimens.

But the paleontologists call for development of standard formats to help ensure data accessibility.

"Currently there is no single 3D format that is universally portable and accepted by all software manufacturers and researchers," the authors write.

Digitype is baseline for measuring future deterioration

SMU's digital model archives a fossil that is significant within the scientific world as a type specimen -- one in which the original fossil description is used to identify future specimens. The fossil also has cultural importance in Texas. The track is a favorite from well-known fossil-rich Dinosaur Valley State Park, where the iconic footprint draws tourists.

The footprint was left by a large three-toed, bipedal, meat-eating dinosaur, most likely the theropod Acrocanthosaurus. The dinosaur probably left the footprint as it walked the shoreline of an ancient shallow sea that once immersed Texas, Adams said. The track was described and named in 1935 as Eubrontes (?) glenrosensis. Tracks are named separately from the dinosaur thought to have made them, he explained.

"Since we can't say with absolute certainty they were made by a specific dinosaur, footprints are considered unique fossils and given their own scientific name," Adams said.

The fossilized footprint, preserved in limestone, was dug up in the 1930s from the bed of the Paluxy River in north central Texas about an hour's drive southwest of Dallas. In 1933 it was put on prominent permanent display in Glen Rose, Texas, embedded in the stone base of a community bandstand on the courthouse square.

The footprint already shows visible damage from erosion, and eventually it will be destroyed by gravity and exposure to the elements, Adams said. The 3D model provides a baseline from which to measure future deterioration, he said.

In comparing the 3D model to an original 1930s photograph made of the footprint, the researchers discovered that some surface areas have fractured and fallen away. By comparing the 3D model with a synthetically altered version, the researchers were able to calculate volume change, which in turn enables reconstruction of lost volume for restoration purposes.
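
The article does not say how the volume change was computed, so the following is only a generic sketch of one standard way to do it: for a closed, consistently oriented triangle mesh, the enclosed volume is the sum of signed tetrahedron volumes, and the lost material is the difference between the totals for the two versions of the scan.

```python
# Generic sketch of one standard approach, not necessarily the one used in
# the paper: the volume enclosed by a closed, consistently oriented triangle
# mesh is the sum of signed tetrahedron volumes formed with the origin.
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Signed volume of a closed triangle mesh with outward-facing triangles."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # each triangle and the origin form a tetrahedron; sum their signed volumes
    return float(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0)

# Tiny self-check with a unit cube (12 outward-facing triangles); expect 1.0.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
tris = np.array([[0, 2, 1], [0, 3, 2],   # bottom
                 [4, 5, 6], [4, 6, 7],   # top
                 [0, 1, 5], [0, 5, 4],   # y = 0 side
                 [1, 2, 6], [1, 6, 5],   # x = 1 side
                 [2, 3, 7], [2, 7, 6],   # y = 1 side
                 [3, 0, 4], [3, 4, 7]])  # x = 0 side
print("cube volume:", mesh_volume(verts, tris))

# For the footprint: volume_lost = mesh_volume(original) - mesh_volume(eroded)
```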

Model comprises 52 scans totaling 2 gigabytes

Adams and his research colleagues took a portable scanner to the bandstand site to capture the 3D images. They employed a NextEngine HD Desktop 3D scanner and ScanStudio HD PRO software running on a standard laptop with 32-bit Windows XP. The scanner and laptop were powered from outlets on the bandstand. The researchers used a tent to control lighting and maximize laser contrast.

Because of the footprint's size -- about 2 feet by 1.4 feet (64 centimeters by 43 centimeters) -- multiple overlapping images were required to capture the full footprint.

Raw scans were imported into Rapidform XOR2 Redesign to align and merge them into a single 3D model. The final 3D model was derived from 52 overlapping scans totaling 2 gigabytes, the authors said.

The full-resolution 3D digital model comprises more than 1 million poly-faces and more than 500,000 vertices with a resolution of 1.2 millimeters. It is stored in Wavefront format. In that format the model is about 145 megabytes. The model is free for downloading from a link on Palaeontologia Electronica's web site.

3D digital footprint also available as a QuickTime virtual object

A smaller facsimile is also available from the journal as a QuickTime Virtual Reality object. In that format, users can slide their mouse pointer over the 3D footprint image to drag it to a desired viewing angle, and zoom and pan.

Adams, a doctoral candidate in the Roy M. Huffington Department of Earth Sciences at SMU, describes the SMU researchers' protocol in a video at www.smuresearch.com, which also carries a link to the journal article and an image slideshow.

Besides the 3D model, included with the Palaeontologia Electronica article is a link to a PDF of the original 1935 scientific article in which SMU geology professor Ellis W. Shuler described and identified the dinosaur that made the track.

Shuler's article, no longer in print, is "Dinosaur Track Mounted in the Band Stand at Glen Rose, Texas," published in Field & Laboratory. The clay molds and plaster casts Shuler made of the bandstand track are now lost, Adams said.

Besides Adams and Jacobs, other co-authors on the article are paleontologists Christopher Strganac and Michael J. Polcyn in the Roy M. Huffington Department of Earth Sciences at SMU.

The research was funded by the Institute for the Study of Earth and Man at SMU.


Source

Thursday, February 10, 2011

Virtual Laboratory Predicts Train Vibrations

The construction of new rail lines, or the relocation of old ones underground, has increased society's interest over recent years in the vibrations produced by trains, especially among people who live or work near the tracks. Now a study headed by the Polytechnic University of Valencia (UPV) has made it possible to estimate the trajectory of vibrations from the point at which they are generated (wheel-rail contact) through to the ground.

"The model acts as a"virtual train laboratory', meaning that, if the parameters of the train or the track ballast are changed, it is possible to infer the pattern of the resulting vibrations," says Julia Real, professor of transport and railways at the UPV and lead author of the study."This is ideal for testing changes that, if they work, could be put into practice."

If, for example, the data from an AVE high speed train with an aerodynamic nose are entered rather than those from another train with different mechanical characteristics, different vibration patterns are obtained. The same thing happens when comparing a track without any levelling defects with another older one, or if the condition or type of material used beneath the sleepers are changed.

"The results depend to a large extent on the elasticity, density and thickness of the materials, especially the ballast (gravel that the sleepers rest on)," points out Pablo Salvador, another UPV researcher, who is also a co-author of the study.

The scientists created the analytical model using mathematical equations that describe the frequency and wavenumber of the vibrations. The details are published in the journal Mathematical and Computer Modelling.

"It's a fairly robust model that is relatively simple to use, and which makes it possible to determine potential vibration levels in an area following the introduction of a rail line, and also provides input information for a 2D surface vibration propagation system," explains Salvador.

Validation on the Madrid-Barcelona line

The theoretical results have been successfully compared with experimental frequency and vibration duration measurements taken along the Madrid-Barcelona high speed line. This information was provided by the public company Ineco, which is attached to the Ministry of Public Works.

This study is the first of a series of three, which will use the same methodology to analyse two other rail facilities -- an urban tram line (line 1 of the FGV in Alicante) and a Spanish narrow gauge railway along the Santander-Liérganes line.

"In the first case, which has just been published, we focused on 'the best' (new vehicles and tracks meeting maximum speed and international gauge requirements); in the second case, the urban tram calls for the greatest possible attention with regard to vibrations, requiring extreme care even though they have only small loads and speeds; and the third case will look at a mixed rail operation (passengers and merchandise) making optimum use of resources," explains Julia Real.

"These are three different kinds of rail facility, each having its own structure, requirements and determining factors... but all of which are tremendously efficient with regard to society and the environment," the researcher concludes.


Source

Wednesday, February 9, 2011

Ultrafast Quantum Computer Closer: Ten Billion Bits of Entanglement Achieved in Silicon

The researchers used high magnetic fields and low temperatures to produce entanglement between the electron and the nucleus of an atom of phosphorus embedded in a highly purified silicon crystal. The electron and the nucleus each behave as a tiny magnet, or 'spin', and each can represent a bit of quantum information. Suitably controlled, these spins can interact with each other to be coaxed into an entangled state -- the most basic state that cannot be mimicked by a conventional computer.

An international team from the UK, Japan, Canada and Germany reports their achievement in the journal Nature.

'The key to generating entanglement was to first align all the spins by using high magnetic fields and low temperatures,' said Stephanie Simmons of Oxford University's Department of Materials, first author of the report. 'Once this has been achieved, the spins can be made to interact with each other using carefully timed microwave and radiofrequency pulses in order to create the entanglement, and then prove that it has been made.'

The work has important implications for integration with existing technology as it uses dopant atoms in silicon, the foundation of the modern computer chip. The procedure was applied in parallel to a vast number of phosphorus atoms.

'Creating 10 billion entangled pairs in silicon with high fidelity is an important step forward for us,' said co-author Dr John Morton of Oxford University's Department of Materials who led the team. 'We now need to deal with the challenge of coupling these pairs together to build a scalable quantum computer in silicon.'

In recent years quantum entanglement has been recognised as a key ingredient in building new technologies that harness quantum properties. Famously described by Einstein as "spooky action at a distance," entanglement means that when two objects are entangled it is impossible to describe one without also describing the other, and the measurement of one object will reveal information about the other object even if they are separated by thousands of miles.

Creating true entanglement involves crossing the barrier between the ordinary uncertainty encountered in our everyday lives and the strange uncertainties of the quantum world. For example, when flipping a coin there is a 50% chance that it comes up heads and a 50% chance of tails, but we would never imagine the coin could land with both heads and tails facing upwards simultaneously: a quantum object such as the electron spin can do just that.

Dr Morton said: 'At high temperatures there is simply a 50/50 mixture of spins pointing in different directions but, under the right conditions, all the spins can be made to point in two opposing directions at the same time. Achieving this was critical to the generation of spin entanglement.'


Source

Tuesday, February 8, 2011

Math May Help Calculate Way to Find New Drugs for HIV and Other Diseases

The technique already has identified several potential new drugs that were shown to be effective for fighting strains of HIV by researchers at Johns Hopkins University.

"The power of this is that it's a general method," said Princeton chemical and biological engineering professor Christodoulos Floudas, who led the research team."It has proven successful in finding potential peptides to fight HIV, but it should also be effective in searching for drugs for other diseases."

Floudas, the Stephen C. Macaleer '63 Professor in Engineering and Applied Science, and Princeton engineering doctoral student Meghan Bellows-Peterson collaborated on the study with researchers at the Johns Hopkins University School of Medicine. Their findings were reported in the Nov. 17, 2010, issue of Biophysical Journal.

The researchers' technique combines concepts from optimization theory, a field of mathematics that focuses on calculating the best option among a number of choices, with those of computational biology, which combines mathematics, statistics and computer science for biology research.

In the case of HIV, the challenge for the Princeton team was to find peptides -- the small chains of biologically active amino acids that are the basic building blocks of proteins -- that could stop the virus from infecting human cells.

"The Princeton researchers have a very sophisticated way of selecting peptides that will fit a particular binding site on an HIV virus," said collaborator Robert Siliciano, a professor of medicine at Johns Hopkins and a 1974 Princeton graduate, who specializes in the treatment of HIV."It narrows the possibilities, and may reduce the amount of time and resources it takes to find new drugs."

Fuzeon (enfuvirtide) is a peptidic drug commonly given to HIV patients for whom first-line HIV medications have not proven fully effective. Fuzeon costs nearly $20,000 per year, and patients must take it regularly due to its short period of effectiveness in the body. The researchers hoped to find an alternative to Fuzeon by discovering new peptides that would be cheaper to produce and allow patients to take fewer and smaller doses.

Fuzeon is thought to inhibit HIV by attaching to the virus and disabling a structure used to penetrate the protective membrane of human cells.

"The actual mechanism for entering cells is still uncertain, but there is a lot of evidence that points to this certain structure on the virus," Bellows-Peterson said."We used the available data on the proteins that form the structure to help us predict what kind of drug might be effective against the virus."

The researchers reasoned that a shorter peptide -- Fuzeon is 36 amino acids long -- would be cheaper to produce and would last longer in the body, since shorter molecules are less susceptible to breakdown. Such formulations also might allow for drugs that could be taken as a pill instead of an injection.

The researchers' biological sleuthing focused on the physical relationship between peptides and the HIV protein structure that Fuzeon targets. The team developed a formula based on statistical thermodynamics to predict whether a given peptide, based on its sequence of amino acids, was likely to bind with the protein that HIV uses for penetrating cells.

This tendency to bind stems from the peptide's free energy state, a physical property related to its shape, which would change if it attached to the HIV protein. The researchers looked for peptides that would shift to a lower energy state after binding to the HIV protein, because these would be more likely to bind to the protein and thus be capable of blocking the virus from entering a cell.
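
As a cartoon of the screening logic only (the actual work relies on a detailed optimization and statistical-thermodynamics model), the sketch below enumerates random 12-residue candidates, scores each with a stand-in binding free-energy predictor, and keeps the few with the most favorable scores. The sequences, the scoring function and the energy range are made up for illustration and are not real drug candidates.

```python
# Cartoon of the screening logic only; the real work uses a detailed
# optimization and statistical-thermodynamics model. Everything below is a
# placeholder, including the "predictor".
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predicted_binding_delta_g(peptide: str) -> float:
    """Stand-in for a physics-based predictor of the binding free-energy
    change; more negative means the peptide is more likely to stay bound."""
    return random.Random(peptide).uniform(-12.0, 2.0)   # kcal/mol, fake score

def random_peptide(length: int = 12) -> str:
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

# Enumerate a large pool of 12-residue candidates and keep the handful with
# the most favorable (lowest) predicted binding free energy.
random.seed(42)
pool = [random_peptide() for _ in range(100_000)]
top_candidates = sorted(pool, key=predicted_binding_delta_g)[:5]

for pep in top_candidates:
    print(pep, f"{predicted_binding_delta_g(pep):6.1f} kcal/mol (placeholder)")
```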

Out of millions of possible peptides, the Princeton researchers used their formula to narrow their search to five promising drug candidates, each 12 amino acids long, one-third the length of Fuzeon. Their collaborators at Johns Hopkins then tested whether the peptides were truly effective at preventing HIV from entering human cells.

The Johns Hopkins scientists found that four of the five designed peptides inhibited HIV and that one of the peptides was particularly potent, even against strains of HIV that are resistant to treatment with Fuzeon. They also found that peptides designed by the Princeton researchers were nontoxic to cells.

"One could never test all the possible peptides to see if they are effective against HIV," Floudas said."But this model was able to sort through millions of possibilities and identify just a few that show promise."

Now that they have identified possible candidates, the researchers plan to experiment with modifying the shape of the peptides to see if they can be made even more effective against the virus. They also hope to expand the use of the model to other diseases, particularly cancers.

"It's an approach to finding peptide-based drugs that target certain proteins, whether those of a virus or those of a cancerous cell," Floudas said.

In addition to Siliciano, collaborators from Johns Hopkins included Lin Shen, a former doctoral student; Philip Cole, a professor of pharmacology; and Martin Taylor, an M.D./Ph.D. candidate who graduated from Princeton in 2005. Hoki Fung, a former Princeton doctoral student who is currently serving as a postdoctoral fellow at École polytechnique fédérale de Lausanne in Switzerland, also participated in the research.

The research was supported by the National Science Foundation.


Source

Friday, February 4, 2011

Unlocking the Secrets of DNA

Neutron scattering gives information about the correlation between base pairs during denaturation, which is not possible using other techniques. This is used to measure the characteristic size of the denatured regions as the temperature is changed, and these sizes can be compared with those predicted by the theoretical model.

The Peyrard-Bishop-Dauxois (PBD) model predicted that fibre DNA denaturation due to temperature would happen in patches along the molecule, rather than 'unzipping' from one end to another. This experiment, the first to investigate the model, strongly supported the model's predictions for the first part of the transition, as the molecule is heated. The experiment could only measure the first stage because when the strands become 50% denatured they are too floppy to remain ordered and the fibre structure is no longer stable -- the DNA sample literally falls to pieces.

"This is an important verification of the validity of model and the associated theory, so it can be applied with more confidence to predict the behaviour and properties of DNA," says Andrew Wildes, an instrument scientist at ILL."This will help to understand biological processes such as gene transcription and cell reproduction, and is also a step toward technological applications such as using DNA as nanoscale tweezers or as computer components."

"There's been a lot of research producing good data -- eg nice melting curves -- about the transition point, but these couldn't tell us how it was happening. For example at 50% melted are half the DNA molecules totally denatured and the other half still firmly joined? Or are the strands of each molecule partially separated? Neutron scattering has enabled us to get structural information on the melting process to answer this kind of question," says Michel Peyrard Professor of Physics at Ecole Normale Supérieure de Lyon, and co-developer of the PBD model."As well as implications for technological development it could also help biological applications, such as predicting where genes might be located on long stretches of DNA sequences."

The experiment follows from the pioneering work of Rosalind Franklin, who showed that x-ray scattering from DNA fibres would give structural information. Based on her work, James Watson and Francis Crick deduced the well-known double helix structure of DNA in 1953. DNA is a dynamic molecule that undergoes large structural changes during normal biological processes. For example, DNA inside the cell nucleus is usually 'bundled up' into chromosomes, but when the genetic information is being copied it must be unravelled and the strands separated to allow the code to be read.


Source

Thursday, February 3, 2011

New Mathematical Model of Information Processing in the Brain Accurately Predicts Some of the Peculiarities of Human Vision

At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference on Jan. 27, Ruth Rosenholtz, a principal research scientist in the Department of Brain and Cognitive Sciences, presented a new mathematical model of how the brain does that summarizing. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges -- boundaries between regions with different light-reflective properties -- and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes -- four L's with different orientations, for instance, would make a square -- and so on, until it's constructed shapes that it can identify as features of known objects.

While this might be a good model of what happens at the center of the visual field, Rosenholtz argues, it's probably less applicable to the periphery, where human object discrimination is notoriously weak. In a series of papers in the last few years, Rosenholtz has proposed that cognitive scientists instead think of the brain as collecting statistics on the features in different patches of the visual field.

Patchy impressions

On Rosenholtz's model, the patches described by the statistics get larger the farther they are from the center. This corresponds with a loss of information, in the same sense that, say, the average income for a city is less informative than the average income for every household in the city. At the center of the visual field, the patches might be so small that the statistics amount to the same thing as descriptions of individual features: A 100-percent concentration of horizontal features could indicate a single horizontal feature. So Rosenholtz's model would converge with the standard model.

But at the edges of the visual field, the models come apart. A large patch whose statistics are, say, 50 percent horizontal features and 50 percent vertical could contain an array of a dozen plus signs, or an assortment of vertical and horizontal lines, or a grid of boxes.

In fact, Rosenholtz's model includes statistics on much more than just orientation of features: There are also measures of things like feature size, brightness and color, and averages of other features -- about 1,000 numbers in all. But in computer simulations, storing even 1,000 statistics for every patch of the visual field requires only one-90th as many virtual neurons as storing visual features themselves, suggesting that statistical summary could be the type of space-saving technique the brain would want to exploit.
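
To give a flavour of what "statistics over patches that grow with eccentricity" can mean in practice, here is a heavily simplified sketch. It pools only a four-bin orientation histogram, whereas Rosenholtz's model pools on the order of 1,000 statistics including correlations across feature types, and the patch-growth rate, grid spacing and toy image are arbitrary choices made for illustration.

```python
# Heavily simplified sketch of the general idea, not Rosenholtz's model:
# summarize an image with orientation histograms computed over patches whose
# size grows with distance from fixation. All numbers are arbitrary.
import numpy as np

def orientation_histogram(patch: np.ndarray, nbins: int = 4) -> np.ndarray:
    """Histogram of local gradient orientations, weighted by gradient strength."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.mod(np.arctan2(gy, gx), np.pi)           # orientation in [0, pi)
    weights = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=nbins, range=(0, np.pi), weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist

def peripheral_summary(image: np.ndarray, fixation: tuple) -> list:
    """Pool statistics over square patches that grow with eccentricity."""
    fy, fx = fixation
    summaries = []
    for y in range(0, image.shape[0], 16):
        for x in range(0, image.shape[1], 16):
            eccentricity = np.hypot(y - fy, x - fx)
            half = int(8 + 0.4 * eccentricity)           # patch radius grows with eccentricity
            patch = image[max(0, y - half):y + half, max(0, x - half):x + half]
            summaries.append(((y, x), orientation_histogram(patch)))
    return summaries

# Toy image: vertical stripes on the left half, horizontal stripes at top right.
img = np.zeros((128, 128))
img[:, :64] = (np.arange(64) % 8 < 4)
img[:64, 64:] = (np.arange(64) % 8 < 4)[:, None]
stats = peripheral_summary(img, fixation=(64, 64))
print(f"{len(stats)} patches summarized; first orientation histogram: {stats[0][1].round(2)}")
```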

Rosenholtz's model grew out of her investigation of a phenomenon called visual crowding. If you were to concentrate your gaze on a point at the center of a mostly blank sheet of paper, you might be able to identify a solitary A at the left edge of the page. But you would fail to identify an identical A at the right edge, the same distance from the center, if instead of standing on its own it were in the center of the word "BOARD."

Rosenholtz's approach explains this disparity: The statistics of the lone A are specific enough to A's that the brain can infer the letter's shape; but the statistics of the corresponding patch on the other side of the visual field also factor in the features of the B, O, R and D, resulting in aggregate values that don't identify any of the letters clearly.

Road test

Rosenholtz's group has also conducted a series of experiments with human subjects designed to test the validity of the model. Subjects might, for instance, be asked to search for a target object -- like the letter O -- amid a sea of "distractors" -- say, a jumble of other letters. A patch of the visual field that contains 11 Q's and one O would have very similar statistics to one that contains a dozen Q's. But it would have much different statistics than a patch that contained a dozen plus signs. In experiments, the degree of difference between the statistics of different patches is an extremely good predictor of how quickly subjects can find a target object: It's much easier to find an O among plus signs than it is to find it amid Q's.

Rosenholtz, who has a joint appointment to the Computer Science and Artificial Intelligence Laboratory, is also interested in the implications of her work for data visualization, an active research area in its own right. For instance, designing subway maps with an eye to maximizing the differences between the summary statistics of different regions could make them easier for rushing commuters to take in at a glance.

In vision science,"there's long been this notion that somehow what the periphery is for is texture," says Denis Pelli, a professor of psychology and neural science at New York University. Rosenholtz's work, he says,"is turning it into real calculations rather than just a side comment." Pelli points out that the brain probably doesn't track exactly the 1,000-odd statistics that Rosenholtz has used, and indeed, Rosenholtz says that she simply adopted a group of statistics commonly used to describe visual data in computer vision research. But Pelli also adds that visual experiments like the ones that Rosenholtz is performing are the right way to narrow down the list to"the ones that really matter."


Source

Wednesday, February 2, 2011

The Science of Bike-Sharing

While the idea is gaining speed and subscribers at the 400 locations around the world where it has been implemented, there have been growing pains -- partly because the projects have been so successful. About seven percent of the time, users aren't able to return a bike because the station at their journey's destination is full. And sometimes stations experience bike shortages, causing frustration with the system.

To solve the problem, Dr. Tal Raviv and Prof. Michal Tzur of Tel Aviv University's Department of Industrial Engineering are developing a mathematical model to lead to a software solution. "These stations are managed imperfectly, based on what the station managers see. They use their best guesses to move bikes to different locations around the city using trucks," explains Dr. Raviv. "There is no system for more scientifically managing the availability of bikes, creating dissatisfaction among users in popular parts of the city."

Their research was presented in November 2010 at the INFORMS 2010 annual meeting in Austin, Texas.

Biking with computers

An environmentalist, Dr. Raviv wants to see more cities in America adopt the bike-sharing system. In Paris alone, there are 1,700 pick-up and drop-off stations. In New York, there soon might be double or triple that amount, making the management of bike availability an extremely daunting task.

Dr. Raviv, Prof. Tzur and their students have created a mathematical model to predict which bike stations should be refilled or emptied -- and when that needs to happen. In small towns with 100 stations, mere manpower can suffice, they say. But anything more and it's really just a guessing game. A computer program will be more effective.
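
Their model and algorithms are not described in the article, so the sketch below is only a toy stand-in that captures the flavour of the decision: given current and target fill levels, greedily shuttle bikes from the most over-full station to the most depleted one. The station names, numbers and truck capacity are invented, and a real system would also optimize the truck routes and timing.

```python
# Toy greedy stand-in, not the Raviv-Tzur model: decide which stations to
# empty or refill by repeatedly moving bikes from the most over-full station
# to the most depleted one. All names and numbers are invented.
stations = {                 # name: (bikes_now, target_bikes)
    "Central":   (28, 15),
    "Riverside": (2, 12),
    "Old Town":  (19, 10),
    "Campus":    (1, 14),
}

def plan_truck_moves(stations, truck_capacity=10):
    levels = {name: now for name, (now, _) in stations.items()}
    targets = {name: target for name, (_, target) in stations.items()}
    moves = []
    while True:
        surplus = {n: levels[n] - targets[n] for n in levels if levels[n] > targets[n]}
        deficit = {n: targets[n] - levels[n] for n in levels if levels[n] < targets[n]}
        if not surplus or not deficit:
            break
        src = max(surplus, key=surplus.get)   # most over-full station
        dst = max(deficit, key=deficit.get)   # most depleted station
        qty = min(surplus[src], deficit[dst], truck_capacity)
        levels[src] -= qty
        levels[dst] += qty
        moves.append((src, dst, qty))
    return moves

for src, dst, qty in plan_truck_moves(stations):
    print(f"move {qty:2d} bikes: {src} -> {dst}")
```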

The researchers are the first to tackle bike-sharing system management using mathematical models and are currently developing a practical algorithmic solution. "Our research involves devising methods and algorithms to solve the routing and scheduling problems of the trucks that move fleets, as well as other operational and design challenges within this system," says Dr. Raviv.

For the built environment

The benefits of bike-sharing programs in any city are plentiful. They cut down traffic congestion and alleviate parking shortages; reduce air pollution and health effects such as asthma and bronchitis; promote fitness; and enable good complementary public transportation by allowing commuters to ride from and to train or bus stations.

Because of the low cost of implementing bike-sharing programs, cities can benefit without significant financial outlay. And in some cities today, bicycles are also the fastest form of transport during rush hour.

The city of Tel Aviv is now in the process of deploying a bike sharing system to ease transport around the city, and improve the quality of life for its residents. Tel Aviv University research is contributing to this plan, and the results will be used in a pilot site in Israel.


Source

Tuesday, February 1, 2011

Computer-Assisted Diagnosis Tools to Aid Pathologists

"The advent of digital whole-slide scanners in recent years has spurred a revolution in imaging technology for histopathology," according to Metin N. Gurcan, Ph.D., an associate professor of Biomedical Informatics at The Ohio State University Medical Center."The large multi-gigapixel images produced by these scanners contain a wealth of information potentially useful for computer-assisted disease diagnosis, grading and prognosis."

Follicular Lymphoma (FL) is one of the most common forms of non-Hodgkin Lymphoma occurring in the United States. FL is a cancer of the human lymph system that usually spreads into the blood, bone marrow and, eventually, internal organs.

A World Health Organization pathological grading system is applied to biopsy samples; doctors usually avoid prescribing severe therapies for lower grades, while they usually recommend radiation and chemotherapy regimens for more aggressive grades.

Accurate grading of the pathological samples generally leads to a promising prognosis, but diagnosis depends solely upon a labor-intensive process that can be affected by human factors such as fatigue, reader variation and bias. Pathologists must visually examine and grade the specimens through high-powered microscopes.

Processing and analysis of such high-resolution images, Gurcan points out, remain non-trivial tasks, not just because of the sheer size of the images, but also due to complexities of underlying factors involving differences in staining, illumination, instrumentation and goals. To overcome many of these obstacles to automation, Gurcan and medical center colleagues, Dr. Gerard Lozanski and Dr. Arwa Shana'ah, turned to the Ohio Supercomputer Center.

Ashok Krishnamurthy, Ph.D., interim co-executive director of the center, and Siddharth Samsi, a computational science researcher there and an OSU graduate student in Electrical and Computer Engineering, put the power of a supercomputer behind the process.

"Our group has been developing tools for grading of follicular lymphoma with promising results," said Samsi."We developed a new automated method for detecting lymph follicles using stained tissue by analyzing the morphological and textural features of the images, mimicking the process that a human expert might use to identify follicle regions. Using these results, we developed models to describe tissue histology for classification of FL grades."

Histological grading of FL is based on the number of large malignant cells counted within tissue samples measuring just 0.159 square millimeters and taken from ten different locations. Based on these findings, FL is assigned to one of three increasing grades of malignancy: Grade I (0-5 cells), Grade II (6-15 cells) and Grade III (more than 15 cells).
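
Expressed as code, and assuming (as is conventional) that the counts from the ten fields are averaged, the grading rule just described reduces to a small helper function; the example counts are invented.

```python
# The grading rule described above as a small helper. It assumes the counts
# from the ten 0.159 mm^2 fields are averaged; the example counts are invented.
def follicular_lymphoma_grade(centroblast_counts: list) -> str:
    """centroblast_counts: counts from ten 0.159 mm^2 fields."""
    avg = sum(centroblast_counts) / len(centroblast_counts)
    if avg <= 5:
        return "Grade I (0-5 cells)"
    if avg <= 15:
        return "Grade II (6-15 cells)"
    return "Grade III (more than 15 cells)"

print(follicular_lymphoma_grade([3, 4, 2, 5, 6, 3, 4, 2, 5, 4]))   # -> Grade I
```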

"The first step involves identifying potentially malignant regions by combining color and texture features," Samsi explained."The second step applies an iterative watershed algorithm to separate merged regions and the final step involves eliminating false positives."

The large data sizes and complexity of the algorithms led Gurcan and Samsi to leverage the parallel computing resources of OSC's Glenn Cluster in order to reduce the time required to process the images. They used MATLAB® and the Parallel Computing Toolbox™ to achieve significant speed-ups. Speed is the goal of the National Cancer Institute-funded research project, but accuracy is essential. Gurcan and Samsi compared their computer segmentation results with manual segmentation and found an average similarity score of 87.11 percent.

"This algorithm is the first crucial step in a computer-aided grading system for Follicular Lymphoma," Gurcan said."By identifying all the follicles in a digitized image, we can use the entire tissue section for grading of the disease, thus providing experts with another tool that can help improve the accuracy and speed of the diagnosis."


Source