Tuesday, March 22, 2011

Simulating Tomorrow's Accelerators at Near the Speed of Light

But realizing the promise of laser-plasma accelerators crucially depends on being able to simulate their operation in three-dimensional detail. Until now such simulations have challenged or exceeded even the capabilities of supercomputers.

A team of researchers led by Jean-Luc Vay of Berkeley Lab's Accelerator and Fusion Research Division (AFRD) has borrowed a page from Einstein to perfect a revolutionary new method for calculating what happens when a laser pulse plows through a plasma in an accelerator like BELLA. Using their "boosted-frame" method, Vay's team has achieved full 3-D simulations of a BELLA stage in just a few hours of supercomputer time, calculations that would have been beyond the state of the art just two years ago.

Not only are the recent BELLA calculations tens of thousands of times faster than conventional methods, they also overcome problems that plagued previous attempts to achieve the full capacity of the boosted-frame method, such as violent numerical instabilities. Vay and his colleagues, Cameron Geddes of AFRD, Estelle Cormier-Michel of the Tech-X Corporation in Denver, and David Grote of Lawrence Livermore National Laboratory, publish their latest findings in the March 2011 issue of the journal Physics of Plasmas.

Space, time, and complexity

The boosted-frame method, first proposed by Vay in 2007, exploits Einstein's Theory of Special Relativity to overcome difficulties posed by the huge range of space and time scales in many accelerator systems. Vast discrepancies of scale are what have made simulating these systems prohibitively costly.

"Most researchers assumed that since the laws of physics are invariable, the huge complexity of these systems must also be invariable," says Vay."But what are the appropriate units of complexity? It turns out to depend on how you make the measurements."

Laser-plasma wakefield accelerators are particularly challenging: they send a very short laser pulse through a plasma measuring a few centimeters or more, many orders of magnitude longer than the pulse itself (or the even-shorter wavelength of its light). In its wake, like a speedboat on water, the laser pulse creates waves in the plasma. These alternating waves of positively and negatively charged particles set up intense electric fields. Bunches of free electrons, shorter than the laser pulse, "surf" the waves and are accelerated to high energies.

"The most common way to model a laser-plasma wakefield accelerator in a computer is by representing the electromagnetic fields as values on a grid, and the plasma as particles that interact with the fields," explains Geddes, a member of the BELLA science staff who has long worked on laser-plasma acceleration."Since you have to resolve the finest structures -- the laser wavelength, the electron bunch -- over the relatively enormous length of the plasma, you need a grid with hundreds of millions of cells."

The laser period must also be resolved in time, and calculated over millions of time steps. As a result, while much of the important physics of BELLA is three-dimensional, direct 3-D simulation was initially impractical. Just a one-dimensional simulation of BELLA required 5,000 hours of supercomputer processor time at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC).
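
A back-of-the-envelope tally makes the scale problem concrete. Here is a sketch with illustrative numbers, not the actual BELLA simulation parameters:

```python
# Rough laboratory-frame grid estimate for a laser-plasma stage.
# All numbers are illustrative assumptions, not BELLA's parameters.

laser_wavelength = 0.8e-6      # m, a typical Ti:sapphire drive laser
plasma_length = 1.0e-2         # m, a centimeter-scale plasma stage
cells_per_wavelength = 10      # longitudinal resolution requirement

# Cells along the propagation axis: the wavelength must be resolved
# over the full plasma length.
nz = round(plasma_length / laser_wavelength) * cells_per_wavelength

# Even a modest transverse grid pushes the total into the hundreds
# of millions of cells.
nx = ny = 64
print(f"longitudinal cells: {nz:,}")            # 125,000
print(f"total 3-D cells:    {nz * nx * ny:,}")  # 512,000,000
```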

Choosing the right frame

The key to reducing complexity and cost lies in choosing the right point of view, or "reference frame." When Albert Einstein was 16 years old he imagined riding along in a frame moving with a beam of light -- a thought experiment that, 10 years later, led to his Theory of Special Relativity, which establishes that there is no privileged reference frame. Observers moving at different velocities may experience space and time differently and even see things happening in a different order, but calculations from any point of view can recover the same physical result.

Among the consequences are that the speed of light in a vacuum is always the same; compared to a stationary observer's experience, time moves more slowly while space contracts for an observer traveling near light speed. These different points of view are called Lorentz frames, and changing one for another is called a Lorentz transformation. The "boosted frame" of the laser pulse is the key to enabling calculations of laser-plasma wakefield accelerators that would otherwise be inaccessible.

A laser pulse pushing through a tenuous plasma moves only a little slower than light through a vacuum. An observer in the stationary laboratory frame sees it as a rapid oscillation of electromagnetic fields moving through a very long plasma, whose simulation requires high resolution and many time steps. But for an observer moving with the pulse, time slows, and the frequency of the oscillations is greatly reduced; meanwhile space contracts, and the plasma becomes much shorter. Thus relatively few time steps are needed to model the interaction between the laser pulse, the plasma waves formed in its wake, and the bunches of electrons riding the wakefield through the plasma. Fewer steps mean less computer time.
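
The relativistic bookkeeping behind that paragraph can be sketched in a few lines. The boost factor and laser parameters below are assumptions for illustration, not the published BELLA setup:

```python
import math

gamma = 100.0                 # Lorentz factor of the boosted frame (assumed)
beta = math.sqrt(1.0 - 1.0 / gamma**2)

laser_wavelength = 0.8e-6     # m, laboratory frame (assumed)
plasma_length = 1.0e-2        # m, laboratory frame (assumed)

# In the boosted frame the plasma, rushing toward the observer, contracts,
# while the forward-moving laser wavelength is Doppler-dilated.
plasma_boosted = plasma_length / gamma
wavelength_boosted = laser_wavelength * gamma * (1.0 + beta)

# The ratio of largest to smallest scale sets the simulation cost;
# it shrinks by roughly 2 * gamma**2.
disparity_lab = plasma_length / laser_wavelength
disparity_boosted = plasma_boosted / wavelength_boosted

print(f"scale disparity, lab frame:     {disparity_lab:,.0f}")       # 12,500
print(f"scale disparity, boosted frame: {disparity_boosted:.2f}")    # 0.63
print(f"speedup factor: ~{disparity_lab / disparity_boosted:,.0f}")  # ~20,000
```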

Eliminating instability

Early attempts to apply the boosted-frame method to laser-plasma wakefield simulations encountered numerical instabilities that limited how much the calculation frame could be boosted. Calculations could still be sped up by factors of tens or even hundreds, but the full promise of the method could not be realized.

Vay's team showed that a particular boosted frame -- that of the wakefield itself, in which the laser pulse is almost stationary -- yields near-optimal speedup of the calculation. It also fundamentally changes how the laser appears in the plasma: in the laboratory frame the observer sees many oscillations of the electromagnetic field in the laser pulse; in the frame of the wake, the observer sees just a few at a time.

The coarser resolution not only makes the speedup possible; it also allows the short-wavelength numerical instabilities to be suppressed without affecting the laser pulse. Combined with special techniques for interpreting the data between frames, this lets the full potential of the boosted-frame principle be realized.

"We produced the first full multidimensional simulation of the 10 billion-electron-volt design for BELLA," says Vay."We even ran simulations all the way up to a trillion electron volts, which establishes our ability to model the behavior of laser-plasma wakefield accelerator stages at varying energies. With this calculation we achieved the theoretical maximum speedup of the boosted-frame method for such systems -- a million times faster than similar calculations in the laboratory frame."

Simulations will still be challenging, especially those needed to tailor applications of high-energy laser-plasma wakefield accelerators to such uses as free-electron lasers for materials and biological sciences, or for homeland security or other research. But the speedup achieves what might otherwise have been virtually impossible: it puts the essential high-resolution simulations within reach of new supercomputers.

This work was supported by the U.S. Department of Energy's Office of Science, including calculations with the WARP beam-simulation code and other applications at the National Energy Research Scientific Computing Center (NERSC).


Source

Thursday, March 10, 2011

Web-Crawling the Brain: 3-D Nanoscale Model of Neural Circuit Created

Researchers in Harvard Medical School's Department of Neurobiology have developed a technique for unraveling these tangled masses of neural connections. Through a combination of microscopy platforms, researchers can crawl through the individual connections composing a neural network, much as Google crawls Web links.

"The questions that such a technique enables us to address are too numerous even to list," said Clay Reid, HMS professor of neurobiology and senior author on a paper reporting the findings in the March 10 edition ofNature.

The cerebral cortex is arguably the most important part of the mammalian brain. It processes sensory input and is the seat of reasoning and, some say, even free will. For the past century, researchers have understood the broad outline of cerebral cortex anatomy. In the past decade, imaging technologies have allowed us to see neurons at work within a cortical circuit, to watch the brain process information.

But while these platforms can show us what a circuit does, they don't show us how it operates.

For many years, Reid's lab has been studying the cerebral cortex, adapting ways to hone the detail with which we can view the brain at work. Recently they and others have succeeded in isolating the activities of individual neurons, watching them fire in response to external stimuli.

The ultimate prize, however, would be to get inside a single cortical circuit and probe the architecture of its wiring.

Just one of these circuits, however, contains between 10,000 and 100,000 neurons, each of which makes about 10,000 interconnections, totaling upwards of 1 billion connections -- all within a single circuit. "This is a radically hard problem to address," Reid said.
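
The arithmetic behind that estimate is simple to check:

```python
# Connection count for a single cortical circuit, per the figures above.
neurons_low, neurons_high = 10_000, 100_000   # neurons per circuit
connections_per_neuron = 10_000               # synapses made by each neuron

print(f"{neurons_low * connections_per_neuron:,}")   # 100,000,000
print(f"{neurons_high * connections_per_neuron:,}")  # 1,000,000,000 -- "upwards of 1 billion"
```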

Reid's team, which included Davi Bock, then a graduate student, and postdoctoral researcher Wei-Chung Allen Lee, embarked on a two-part study of the pinpoint-sized region of a mouse brain that is involved in processing vision. They first injected the brain with dyes that flashed whenever specific neurons fired and recorded the firings using a laser-scanning microscope. They then conducted a large anatomy experiment, using electron microscopy to see the same neurons and hundreds of others with nanometer resolution.

Using a new imaging system they developed, the team recorded more than 3 million high-resolution images. They sent them to the Pittsburgh Supercomputing Center at Carnegie Mellon University, where researchers stitched them into 3-D images. Using the resulting images, Bock, Lee and laboratory technician Hyon Suk Kim selected 10 individual neurons and painstakingly traced many of their connections, crawling through the brain's dense thicket to create a partial wiring diagram.
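
The web-crawl analogy maps directly onto graph traversal. Here is a minimal sketch of the idea on a made-up toy wiring diagram; the neuron names and links are hypothetical, not data from the study:

```python
from collections import deque

# Hypothetical toy wiring diagram: each neuron maps to the neurons
# it synapses onto.
synapses = {
    "n1": ["n2", "n3"],
    "n2": ["n4"],
    "n3": ["n4", "n5"],
    "n4": [],
    "n5": ["n1"],   # circuits contain loops, unlike most website link graphs
}

def crawl(start):
    """Breadth-first crawl of the wiring diagram, the way a web crawler
    follows links: visit a neuron, then queue every neuron it targets."""
    seen, queue = {start}, deque([start])
    while queue:
        neuron = queue.popleft()
        yield neuron
        for target in synapses.get(neuron, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)

print(list(crawl("n1")))   # ['n1', 'n2', 'n3', 'n4', 'n5']
```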

This model also yielded some interesting insights into how the brain functions. Reid's group found that neurons tasked with suppressing brain activity seem to be randomly wired, putting the lid on local groups of neurons all at once rather than picking and choosing. Such findings are important because many neurological conditions, such as epilepsy, are the result of neural inhibition gone awry.

"This is just the iceberg's tip," said Reid."Within ten years I'm convinced we'll be imaging the activity of thousands of neurons in a living brain. In a visual circuit, we'll interpret the data to reconstruct what an animal actually sees. By that time, with the anatomical imaging, we'll also know how it's all wired together."

For now, Reid and his colleagues are working to scale up this platform to generate larger data sets.

"How the brain works is one of the greatest mysteries in nature," Reid added,"and this research presents a new and powerful way for us to explore that mystery."

This research was funded by the Center for Brain Science at Harvard University, Microsoft Research, and the NIH through the National Eye Institute. Researchers report no conflicts of interest.


Source

Wednesday, March 9, 2011

Real March Madness Is Relying on Seedings to Determine Final Four

According to an operations research analysis model developed by Sheldon H. Jacobson, a professor of computer science and the director of the simulation and optimization laboratory at the University of Illinois, you're better off picking a combination of two top-seeded teams, a No. 2 seed and a No. 3 seed.

"There are patterns that exist in the seeds," Jacobson says."As much as we like to believe otherwise, the fact of the matter is that we've uncovered a model that captures this pattern. As a result of that, in spite of what we emotionally feel about teams or who's going to win, the reality is that the numbers trump all of these things," Jacobson said."It's more likely to be 1, 1, 2, 3 in the Final Four than four No. 1's."

Jacobson's model is unique in that it prognosticates not based on who the teams are, but on the seeds they hold. He describes his model in a forthcoming paper in the journal Omega with co-authors Alex Nikolaev, of the University at Buffalo; Adrian Lee, of CITERI (Central Illinois Technology and Education Research Institute); and Douglas King, a graduate student at Illinois.

Jacobson has also integrated the model into a user-friendly website to help March Madness fans determine the relative probability of their chosen team combinations appearing in the final rounds of the NCAA men's basketball tournament.

A number of websites offer assistance to budding bracketologists, such as game-by-game probabilities of particular match-ups or the odds of a given team reaching a particular point in the tournament. Jacobson's website is the only one to look at collective groups of seeds within the brackets.

"What we do is use the power of analytics to uncover trends in 'bracketology.' It really is a mathematical science," he said."What our model enables us to do is look at the likelihood or probability that a certain set of seed combinations will occur as we advance deeper into the tournament."

Jacobson's team applied a statistical method called goodness-of-fit testing to NCAA tournament data from 1985 to 2010, identifying patterns in seed distribution in the Elite Eight, Final Four and national championship rounds. They found that the seeds themselves exhibit certain statistical patterns, independent of the team. They then fit the pattern to a stochastic model they can use to assess probabilities and odds.
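
A minimal sketch of that goodness-of-fit step, with made-up counts standing in for the actual 1985-2010 tournament data:

```python
from scipy import stats

# Observed: how often each seed (1..8 here) appeared in the Final Four;
# expected: counts predicted by a candidate stochastic model of seed
# advancement. Both columns are invented for illustration.
observed = [49, 13, 12, 9, 7, 4, 3, 7]
expected = [45, 16, 11, 8, 6, 5, 4, 9]

chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A high p-value means the observed seed pattern is statistically
# consistent with the model; a low one means the model should be rejected.
```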

Two computer science undergraduates, Ammar Rizwan and Emon Dai, built the website bracketodds.cs.illinois.edu based on Jacobson's model. The publicly accessible website will be up through the entire tournament. Users can evaluate their brackets and compare the relative likelihood of two sets of seed combinations.

"For each of the rounds that we have available, you could put in what you have so far and even compare it to other possible sets," Rizwan said.

For example, the probability of the Final Four comprising the four top-seeded teams is 0.026, or once every 39 years. Meanwhile, the probability of a Final Four of all No. 16 seeds -- the lowest-seeded teams in the tournament -- is so small that it would be expected to occur about once every eight hundred trillion years. (The Milky Way contains an estimated one hundred billion stars.)

"Basically, if every star was given a year, the years it would take for this to occur is 8,000 times all the stars in the galaxy," Jacobson said."It gives you perspective."

However, sets with long odds do happen. The most unlikely combination in the 26 years studied occurred in 2000, with a Final Four seed combination of 1, 5, 8 and 8. But such a bracket is only predicted to happen once every 32,000 years, so those filling out brackets at home shouldn't hope for a repeat.

What amateur bracketologists can be confident of is upsets. For even the most probable Final Four combination of 1, 1, 2, 3 to occur, two top-seeded schools have to lose.

"In fact, upsets occur with great frequency and great predictability. If you look statistically, there's a certain number of upsets that occur in each round. We just don't know which team they're going to be or when they're going to occur," Jacobson said.

After the 2011 tournament, and in years to come, Jacobson will integrate the new data into the model to continually refine its prediction power. For 2012, Jacobson, Rizwan and Dai hope to integrate a comparative probability feature into the website to allow users to calculate, for example, the probability of a particular set of Final Four seeds if the Elite Eight seeds are given.
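
That planned feature is ordinary conditional probability. A sketch with invented numbers, not the model's actual estimates:

```python
# P(Final Four set | Elite Eight set) = P(both) / P(Elite Eight set).
p_elite_eight = 0.040   # probability of the given Elite Eight seeds (invented)
p_both = 0.010          # probability of those seeds AND the Final Four set (invented)

print(p_both / p_elite_eight)   # 0.25
```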

Until then, users can find out how likely their picks really are, and compare them against friends' picks -- or even sports commentators'.

"We're not here specifically to say 'Syracuse is going to beat Kentucky in the Elite Eight.' What we're saying is that the seed numbers have patterns," Jacobson said."A 1, 1, 2, 3 is the most likely Final Four. I don't know which two 1's, I don't know which No. 2 and I don't know which No. 3. But I can tell you that if you want to go purely with the odds, choose a Final Four with seeds 1, 1, 2, 3."


Source

Tuesday, March 8, 2011

Extremely Fast Magnetic Random Access Memory (MRAM) Computer Data Storage Within Reach

An invention made by the Physikalisch-Technische Bundesanstalt (PTB) changes this situation: a special chip connection, combined with dynamic triggering of the component, reduces the response time from the previous 2 ns to below 500 ps. This corresponds to a data rate of up to 2 GBit/s (instead of approx. 400 MBit/s so far). Power consumption, thermal load and the bit error rate are reduced as well. The European patent is being granted this spring; the US patent was already granted in 2010. PTB is still seeking an industrial partner to further develop and manufacture such MRAMs under licence.

Fast computer storage chips like DRAM and SRAM (Dynamic and Static Random Access Memory), which are commonly used today, have one decisive disadvantage: if the power supply is interrupted, the information stored on them is irrevocably lost. MRAM promises to put an end to this. In MRAM, digital information is stored not as an electric charge, but in the magnetic alignment of storage cells (magnetic spins). MRAMs are very versatile storage chips because, in addition to non-volatile information storage, they also allow fast access, a high integration density and an unlimited number of write and read cycles.

However, current MRAM designs are not yet fast enough to outperform the best competitors. Programming a magnetic bit takes approx. 2 ns, and attempts to speed this up run into limits rooted in the fundamental physical properties of magnetic storage cells: during the programming process, not only the desired storage cell is magnetically excited, but also a large number of other cells. These excitations -- the so-called magnetic ringing -- are only weakly damped; their decay can take up to approx. 2 ns, and during this time no other cell of the MRAM chip can be programmed. As a result, the maximum clock rate of MRAM has so far been limited to approx. 400 MHz.
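
The clock-rate ceiling follows from a simple reciprocal estimate. This is a sketch; the quoted figures also reflect device overheads it ignores:

```python
ringing_decay = 2e-9                   # s, time for magnetic ringing to die away
print(f"{1 / ringing_decay:.1e} Hz")   # 5.0e+08 -> a few hundred MHz in practice

write_pulse = 500e-12                  # s, the new sub-500-ps programming pulse
print(f"{1 / write_pulse:.1e} Hz")     # 2.0e+09 -> clock rates above 2 GHz
```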

Until now, all attempts to increase the speed have led to intolerable write errors. Now, PTB scientists have optimized the MRAM design and integrated so-called ballistic bit triggering, which was also developed at PTB. Here, the magnetic pulses that do the programming are shaped so skilfully that the other cells in the MRAM are hardly magnetically excited at all. The pulse ensures that the magnetization of a cell which is to be switched performs half a precession turn (180°), while a cell whose storage state is to remain unchanged performs a complete precession turn (360°). In both cases the magnetization is in equilibrium after the magnetic pulse has decayed, and no magnetic excitations linger.
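
Schematically, the triggering condition can be expressed as a rule on the precession angle each cell accumulates during the pulse. This is a toy model of the published idea, not PTB's device physics:

```python
def end_state(precession_deg):
    """Where a cell's magnetization ends up after the write pulse."""
    angle = precession_deg % 360.0
    if angle == 180.0:
        return "switched, at rest"      # bit flipped, no ringing
    if angle == 0.0:
        return "unchanged, at rest"     # bit kept, no ringing
    return "ringing"                    # leftover precession blocks the next write

print(end_state(180.0))   # the addressed cell: half a turn, switched
print(end_state(360.0))   # a disturbed neighbour: full turn, unchanged
print(end_state(90.0))    # a badly shaped pulse leaves ringing behind
```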

This optimized bit triggering also works with ultra-short switching pulses of less than 500 ps, so the maximum clock rate of the MRAM rises above 2 GHz. In addition, several bits can be programmed at the same time, which would increase the effective write rate per bit by more than another order of magnitude. The invention thus allows MRAM to achieve clock rates that compete with those of the fastest volatile storage components.


Source

Saturday, March 5, 2011

New Developments in Quantum Computing

At the Association for Computing Machinery's 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can't. Although building the experimental apparatus would be difficult, it shouldn't be as difficult as building a fully functional quantum computer.

Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a beam splitter, a device that takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles -- photons -- reach the beam splitter at exactly the same time, they will both go either right or left; they won't take different paths. It's another quantum behavior of fundamental particles that defies our physical intuitions.
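
The cancellation behind that bunching effect takes only a few lines to verify. This is a textbook sketch using one common beam-splitter sign convention, not the Rochester paper's formalism:

```python
from math import sqrt

s = 1 / sqrt(2)
# 50/50 beam splitter: amplitude for a photon entering port a or b
# to exit left (L) or right (R); the minus sign is required by unitarity.
amp = {("a", "L"): s, ("a", "R"): s,
       ("b", "L"): s, ("b", "R"): -s}

# Two indistinguishable photons, one per input port. The two ways of
# getting "one photon in each output" are added as amplitudes:
one_each = amp["a", "L"] * amp["b", "R"] + amp["a", "R"] * amp["b", "L"]
both_L = amp["a", "L"] * amp["b", "L"] * sqrt(2)   # bosonic bunching factor
both_R = amp["a", "R"] * amp["b", "R"] * sqrt(2)

print(f"P(one each side) = {one_each**2:.2f}")  # 0.00 -- the paths cancel
print(f"P(both left)     = {both_L**2:.2f}")    # 0.50
print(f"P(both right)    = {both_R**2:.2f}")    # 0.50
```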

The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons -- about 36 detectors for six photons, 100 detectors for 10 photons.

For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there's an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they'll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors.
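
The outcome space those statistics live in is enormous. Using the article's rough detector counts, and assuming at most one photon per detector:

```python
from math import comb

print(f"{comb(36, 6):,}")    # 1,947,792 possible six-photon outcomes
print(f"{comb(100, 10):,}")  # 17,310,309,456,440 possible ten-photon outcomes
```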

Calculating that distribution -- the likelihood of photons striking a given combination of detectors -- is a hard problem. The researchers' experiment doesn't solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov's paper is that, not only is calculating the distribution a hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world.
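
The hardness has a concrete source: in linear optics, each outcome's probability is governed by the permanent of a matrix built from the interferometer's settings, and no efficient algorithm for the permanent is known. A sketch of the scaling follows; the brute-force permanent is for illustration, while Ryser's formula, the best known exact method, still needs on the order of 2^n * n operations:

```python
from itertools import permutations
import numpy as np

def permanent(m):
    """Matrix permanent by brute force over n! permutations -- only
    feasible for tiny matrices, which is exactly the point."""
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

print(permanent(np.ones((3, 3))))   # 6.0 (= 3!)

# Cost of one exact amplitude via Ryser's formula, ~2**n * n operations:
for n in (6, 10, 30, 100):
    print(f"{n:3d} photons: ~{2**n * n:.1e} operations")
```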

The question, then, is whether the experiment can be successfully executed. The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. Barry Sanders, director of the University of Calgary's Institute for Quantum Information Science, points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons' trajectories.

The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. "People have been working on it for a decade, making great things," Sanders says. "But getting a train of single photons is still a challenge."

Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem. Although they were unable to prove that it was, Aaronson says that "most of our paper is devoted to giving evidence that the answer to that is yes." He's hopeful that a proof is forthcoming, whether from his research group or others'.


Source

Friday, March 4, 2011

Human Cues Used to Improve Computer User-Friendliness

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist."Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is "computer vision." That is, how can a simple webcam work more like the human eye? Can camera-captured data be used to understand a real-world object? Can this data be used to "see" the user and "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says."Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says."We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others -- young children, for instance -- cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says,"could help us to train the computer to do facial-recognition analysis in place of experts."


Source

Thursday, March 3, 2011

New Interpretation of Antarctic Ice Cores: Prevailing Theory on Climate History Expanded

The new study shows, however, that major portions of the temperature fluctuations can be explained equally well by local climate changes in the southern hemisphere.

Variations in Earth's orbit and in the tilt of its axis have given decisive impetus to the climate changes of the last million years. The Serbian mathematician Milutin Milankovitch calculated their influence on the seasonal distribution of insolation at the beginning of the 20th century, and they have been debated as an astronomical theory of the ice ages ever since. Because land surfaces react particularly sensitively to changes in insolation, and Earth's land masses are unequally distributed between the hemispheres, Milankovitch held that insolation changes in the northern hemisphere were of paramount importance for climate change over long periods of time. His considerations became the prevailing working hypothesis of modern climate research, as numerous climate reconstructions based on ice cores, marine sediments and other climate archives appear to support it.

For the newly published study, AWI scientists Thomas Laepple, Gerrit Lohmann and Martin Werner have re-analyzed in depth the temperature reconstructions based on ice cores. For the first time, they took into account that winter temperatures have a greater influence than summer temperatures on the signal recorded in Antarctic ice cores. When this effect is included in the model calculations, the temperature fluctuations reconstructed from ice cores can also be explained by local climate changes in the southern hemisphere.
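
A minimal sketch of the seasonal-weighting effect, with invented forcings and weights rather than the AWI model:

```python
import numpy as np

t = np.linspace(0, 200_000, 2001)    # years

# Toy local temperature anomalies driven at orbital periods (invented):
summer = 1.0 * np.sin(2 * np.pi * t / 41_000)        # obliquity-like cycle
winter = 1.5 * np.sin(2 * np.pi * t / 23_000 + 1.0)  # precession-like cycle

annual_mean = 0.5 * summer + 0.5 * winter
recorded = 0.3 * summer + 0.7 * winter   # a winter-biased ice-core archive

# The archive and the annual mean differ in amplitude and phase, so reading
# the recorded signal as an annual mean misattributes the orbital imprint.
print(f"correlation: {np.corrcoef(annual_mean, recorded)[0, 1]:.2f}")
```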

Thomas Laepple, who is currently conducting research at Harvard University in the US through a scholarship from the Alexander von Humboldt Foundation, explains the significance of the new findings: "Our results are also interesting because they may lead us out of a scientific dead end." After all, the question of whether and how climate activity in the northern hemisphere is linked to that in the southern hemisphere is one of the most exciting scientific issues in connection with our understanding of climate change. Thus far many researchers have attempted to explain historical Earth climate data from Antarctica on the basis of Milankovitch's classic hypothesis. "To date, it hasn't been possible to plausibly substantiate all aspects of this hypothesis, however," states Laepple. "Now the game is open again and we can try to gain a better understanding of the long-term physical mechanisms that influence the alternation of ice ages and warm periods."

"Moreover, we were able to show that not only data from ice cores, but also data from marine sediments display similar shifts in certain seasons. That's why there are still plenty of issues to discuss regarding further interpretation of palaeoclimate data," adds Gerrit Lohmann. The AWI physicists emphasize that a combination of high-quality data and models can provide insights into climate change."Knowledge about times in the distant past helps us to understand the dynamics of the climate. Only in this way will we learn how the Earth's climate has changed and how sensitively it reacts to changes."

To avoid misunderstandings, a final point is very important to the AWI scientists: the new study does not call into question that the currently observed climate change has, for the most part, anthropogenic causes. Cyclic changes such as those examined in the Nature publication take place over tens of thousands or hundreds of thousands of years. The drastic emission of anthropogenic greenhouse gases within a few hundred years adds to the natural rise in greenhouse gases after the last ice age and is unique within the last million years. How the climate system, including its complex physical and biological feedbacks, will develop in the long run is the subject of current research at the Alfred Wegener Institute.


Source