Thursday, November 25, 2010

'Racetrack' Magnetic Memory Could Make Computer Memory 100,000 Times Faster

Annoyed by how long it took his computer to boot up, Kläui began to think about an alternative. Hard disks are cheap and can store enormous quantities of data, but they are slow; every time a computer boots up, 2-3 minutes are lost while information is transferred from the hard disk into RAM (random access memory). The global cost in terms of lost productivity and energy consumption runs into the hundreds of millions of dollars a day.

Like the tried and true VHS videocassette, the proposed solution involves data recorded on magnetic tape. But the similarity ends there; in this system the tape would be a nickel-iron nanowire, a million times smaller than the classic tape. And unlike a magnetic videotape, in this system nothing moves mechanically. The bits of information stored in the wire are simply pushed around inside the tape using a spin polarized current, attaining the breakneck speed of several hundred meters per second in the process. It's like reading an entire VHS cassette in less than a second.

In order for the idea to be feasible, each bit of information must be clearly separated from the next so that the data can be read reliably. This is achieved by using domain walls with magnetic vortices to delineate adjacent bits. To estimate the maximum velocity at which the bits can be moved, Kläui and his colleagues carried out measurements on these vortices and found that the underlying physical mechanism could allow for even higher access speeds than expected.

Their results were published online October 25, 2010, in the journal Physical Review Letters. Scientists at the Zurich Research Center of IBM (which is developing a racetrack memory) have confirmed the importance of the results in a Viewpoint article. Millions or even billions of nanowires would be embedded in a chip, providing enormous capacity on a shock-proof platform. A market-ready device could be available in as little as 5-7 years.

Racetrack memory promises to be a real breakthrough in data storage and retrieval. Racetrack-equipped computers would boot up instantly, and their information could be accessed 100,000 times more rapidly than with a traditional hard disk. They would also save energy. RAM needs to be refreshed every millionth of a second, so an idle computer consumes up to 300 mW just maintaining data in RAM. Because racetrack memory doesn't have this constraint, energy consumption could be slashed by nearly a factor of 300, to a few mW while the memory is idle. It's an important consideration: computing and electronics currently consume 6% of worldwide electricity, a share forecast to rise to 15% by 2025.


Source

Wednesday, November 24, 2010

New Standard Proposed for Supercomputing

The rating system, Graph500, tests supercomputers for their skill in analyzing large, graph-based structures that link the huge numbers of data points present in biological, social and security problems, among other areas.

"By creating this test, we hope to influence computer makers to build computers with the architecture to deal with these increasingly complex problems," Sandia researcher Richard Murphy said.

Rob Leland, director of Sandia's Computations, Computers, and Math Center, said, "The thoughtful definition of this new competitive standard is both subtle and important, as it may heavily influence computer architecture for decades to come."

The group isn't trying to compete with Linpack, the current standard test of supercomputer speed, Murphy said. "There have been lots of attempts to supplant it, and our philosophy is simply that it doesn't measure performance for the applications we need, so we need another, hopefully complementary, test," he said.

Many scientists view Linpack as a "plain vanilla" test that tells how fast a computer can perform basic calculations but bears little relationship to the actual problems the machines must solve.

The impetus for a supplemental test came about at "an exciting dinner conversation at Supercomputing 2009," said Murphy. "A core group of us recruited other professional colleagues, and the effort grew into an international steering committee of over 30 people."

Many large computer makers have indicated interest, said Murphy, adding there's been buy-in from Intel, IBM, AMD, NVIDIA and Oracle. "Whether or not they submit test results remains to be seen, but their representatives are on our steering committee."

Each organization has donated time and expertise of committee members, he said.

While some computer makers and their architects may prefer to ignore a new test for fear their machines will not do well, the hope is that demand for a more complex test will grow naturally out of the increasing complexity of the problems themselves.

Studies show that moving data around (not simple computations) will be the dominant energy problem on exascale machines, the next frontier in supercomputing and the subject of a nascent U.S. Department of Energy initiative to reach that level of operation within a decade, Leland said. (Petascale and exascale correspond to 10^15 and 10^18 operations per second, respectively.)

Part of the goal of the Graph500 list is to point out that, in addition to the growing expense of data movement, any shift in the application base from physics to large-scale data problems is likely to increase applications' data-movement requirements even further, because memory and computational capability must increase proportionally. That is, an exascale computer requires an exascale memory.

"In short, we're going to have to rethink how we build computers to solve these problems, and the Graph500 is meant as an early stake in the ground for these application requirements," said Murphy.

How does it work?

Large data problems are very different from ordinary physics problems.

Unlike a typical computation-oriented application, large-data analysis often involves searching large, sparse data sets while performing only very simple computational operations.

To deal with this, the Graph500 benchmark defines two computational kernels: the construction of a large graph that links huge numbers of participants, and a parallel breadth-first search of that graph. A simplified sketch of the two kernels appears below.
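To make the two kernels concrete, here is a minimal sketch in Python. It is not the official Graph500 reference code (which generates a scale-free Kronecker graph and reports a traversal rate in edges per second); the function names and toy sizes below are purely illustrative. It builds a sparse adjacency-list graph from randomly generated edges, then runs a breadth-first search from a chosen root.

```python
import random
from collections import deque, defaultdict


def generate_edges(num_vertices, num_edges, seed=42):
    """Kernel 1 (simplified): produce an edge list for a sparse graph.

    The real Graph500 generator builds a scale-free Kronecker graph;
    uniform random edges are used here only to keep the sketch short.
    """
    rng = random.Random(seed)
    return [(rng.randrange(num_vertices), rng.randrange(num_vertices))
            for _ in range(num_edges)]


def build_graph(edges):
    """Assemble an undirected adjacency list from the edge list."""
    adjacency = defaultdict(list)
    for u, v in edges:
        if u != v:                      # ignore self-loops
            adjacency[u].append(v)
            adjacency[v].append(u)
    return adjacency


def bfs(adjacency, root):
    """Kernel 2: breadth-first search, recording each reached vertex's parent.

    On a large distributed graph this loop is dominated by many small,
    nearly random memory accesses -- the behavior Graph500 is meant to stress.
    """
    parent = {root: root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent


if __name__ == "__main__":
    n, m = 1 << 16, 1 << 20             # toy sizes; real runs use billions of edges
    graph = build_graph(generate_edges(n, m))
    reached = bfs(graph, root=0)
    print(f"reached {len(reached)} of {n} vertices")
```

In a real Graph500 run the graph is far too large for any single node, so performance is limited by the memory system and interconnect rather than by floating-point throughput, which is exactly the behavior the benchmark is designed to expose.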

"We want to look at the results of ensembles of simulations, or the outputs of big simulations in an automated fashion," Murphy said."The Graph500 is a methodology for doing just that. You can think of them being complementary in that way -- graph problems can be used to figure out what the simulation actually told us."

Performance for these applications is dominated by the ability of the machine to sustain a large number of small, nearly random remote data accesses across its memory system and interconnects, as well as the parallelism available in the machine.

Five example problem areas for these computational kernels are cybersecurity, medical informatics, data enrichment, social networks and symbolic networks:

  • Cybersecurity: Large enterprises may create 15 billion log entries per day and require a full scan.
  • Medical informatics: There are an estimated 50 million patient records, with 20 to 200 records per patient, resulting in billions of individual pieces of information, all of which need entity resolution: determining, in other words, which records belong to her, him or somebody else.
  • Data enrichment: Typically petascale data sets; one example is maritime domain awareness, with hundreds of millions of individual transponders, tens of thousands of ships and tens of millions of pieces of individual bulk cargo. These problems also involve many different types of input data.
  • Social networks: Almost unbounded, like Facebook.
  • Symbolic networks: Often petabytes in size. One example is the human cortex, with 25 billion neurons and approximately 7,000 connections each.

"Many of us on the steering committee believe that these kinds of problems have the potential to eclipse traditional physics-based HPC {high performance computing} over the next decade," Murphy said.

While general agreement exists that complex simulations work well for the physical sciences, where lab work and simulations play off each other, there is some doubt they can solve social problems that have essentially infinite numbers of components. These include terrorism, war, epidemics and societal problems.

"These are exactly the areas that concern me," Murphy said."There's been good graph-based analysis of pandemic flu. Facebook shows tremendous social science implications. Economic modeling this way shows promise.

"We're all engineers and we don't want to over-hype or over-promise, but there's real excitement about these kinds of big data problems right now," he said."We see them as an integral part of science, and the community as a whole is slowly embracing that concept.

"However, it's so new we don't want to sound as if we're hyping the cure to all scientific ills. We're asking, 'What could a computer provide us?' and we know we're ignoring the human factors in problems that may stump the fastest computer. That'll have to be worked out."


Source

Tuesday, November 23, 2010

Supercomputing Center Breaks the Petaflops Barrier

NERSC's newest supercomputer, a 153,408 processor-core Cray XE6 system, posted a performance of 1.05 petaflops (quadrillions of calculations per second) running the Linpack benchmark. In keeping with NERSC's tradition of naming computers for renowned scientists, the system is named Hopper in honor of Admiral Grace Hopper, a pioneer in software development and programming languages.
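For scale, a back-of-the-envelope division (ours, not the article's) gives the per-core throughput implied by the Linpack figure:

\[ \frac{1.05 \times 10^{15}\ \text{flop/s}}{153{,}408\ \text{cores}} \approx 6.8 \times 10^{9}\ \text{flop/s}, \]

or roughly 6.8 gigaflops per core.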

NERSC serves one of the largest research communities of all supercomputing centers in the United States. The center's supercomputers are used to tackle a wide range of scientific challenges, including global climate change, combustion, clean energy, new materials, astrophysics, genomics, particle physics and chemistry. The more than 400 projects being addressed by NERSC users represent the research mission areas of DOE's Office of Science.

The increasing power of supercomputers helps scientists study problems in greater detail and with greater accuracy, such as increasing the resolution of climate models and creating models of new materials with thousands of atoms. Supercomputers are increasingly used to complement scientific experimentation by allowing researchers to test theories using computational models and to analyze large scientific data sets. NERSC is also home to Franklin, a 38,128-core Cray XT4 supercomputer with a Linpack performance of 266 teraflops (trillions of calculations per second). Franklin is ranked number 27 on the newest TOP500 list.

The system, installed in September 2010, is funded by DOE's Office of Advanced Scientific Computing Research.


Source

Making Stars: How Cosmic Dust and Gas Shape Galaxy Evolution

"Formation of galaxies is one of the biggest remaining questions in astrophysics," said Andrey Kravtsov, associate professor in astronomy& astrophysics at the University of Chicago.

Astrophysicists are moving closer to answering that question, thanks to a combination of new observations and supercomputer simulations, including those conducted by Kravtsov and Nick Gnedin, a physicist at Fermi National Accelerator Laboratory.

Gnedin and Kravtsov published new results based on their simulations in the May 1, 2010 issue of The Astrophysical Journal, explaining why stars formed more slowly in the early history of the universe than they did much later. The paper quickly came to the attention of Robert C. Kennicutt Jr., director of the University of Cambridge's Institute of Astronomy and co-discoverer of one of the key observational findings about star formation in galaxies, known as the Kennicutt-Schmidt relation.

In the June 3, 2010 issue of Nature, Kennicutt noted that the recent spate of observations and theoretical simulations bodes well for the future of astrophysics. Of their Astrophysical Journal paper, Kennicutt wrote, "Gnedin and Kravtsov take a significant step in unifying these observations and simulations, and provide a prime illustration of the recent progress in the subject as a whole."

Star-formation law

Kennicutt's star-formation law relates the amount of gas in galaxies in a given area to the rate at which it turns into stars over the same area. The relation has been quite useful when applied to galaxies observed late in the history of the universe, but recent observations by Arthur Wolfe of the University of California, San Diego, and Hsiao-Wen Chen, assistant professor in astronomy and astrophysics at UChicago, indicate that the relation fails for galaxies observed during the first two billion years following the big bang.
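The article does not write the relation out; in its commonly quoted form it is a power law between surface densities, with the exponent for nearby galaxies usually reported near 1.4:

\[ \Sigma_{\mathrm{SFR}} \;\propto\; \Sigma_{\mathrm{gas}}^{\,N}, \qquad N \approx 1.4, \]

where \(\Sigma_{\mathrm{SFR}}\) is the star-formation rate per unit area and \(\Sigma_{\mathrm{gas}}\) is the gas surface density. The early galaxies observed by Wolfe and Chen form stars well below the rate this scaling would predict from their gas content, which is the shortfall the simulations set out to explain.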

Gnedin and Kravtsov's work successfully explains why. "What it shows is that at early stages of evolution, galaxies were much less efficient in converting their gas into stars," Kravtsov said.

Stellar evolution leads to an increasing abundance of dust, as stars produce elements heavier than helium, including carbon, oxygen and iron, which are key ingredients of dust particles.

"Early on, galaxies didn't have enough time to produce a lot of dust, and without dust it's very difficult to form these stellar nurseries," Kravtsov said."They don't convert the gas as efficiently as galaxies today, which are already quite dusty."

The star-formation process begins when interstellar gas clouds become increasingly dense. At some point, in cold regions of these clouds, hydrogen atoms start pairing up to form molecules. They do so only inefficiently in empty space, but find each other far more readily on the surface of a cosmic dust particle.

"The biggest particles of cosmic dust are like the smallest particles of sand on good beaches in Hawaii," Gnedin said.

These hydrogen molecules are fragile and easily destroyed by the intense ultraviolet light emitted by massive young stars. But in some galactic regions, dark clouds, so called because of the dust they contain, form a layer that shields the hydrogen molecules from the destructive light of other stars.

Stellar nurseries

"I like to think about stars as being very bad parents, because they provide a bad environment for the next generation," Gnedin joked. The dust therefore provides a protective environment for stellar nurseries, Kravtsov noted.

"There is a simple connection between the presence of dust in this diffuse gas and its ability to form stars, and that's something that we modeled for the first time in these galaxy-formation simulations," Kravtsov said."It's very plausible, but we don't know for sure that that's exactly what's happening."

The Gnedin-Kravtsov model also provides a natural explanation for why spiral galaxies predominantly fill the sky today, and why small galaxies form stars slowly and inefficiently.

"We usually see very thin disks, and those types of systems are very difficult to form in galaxy-formation simulations," Kravtsov said.

That's because astrophysicists have assumed that galaxies formed gradually through a series of collisions. The problem: simulations show that when galaxies merge, they form spheroidal structures that look more elliptical than spiral.

But early in the history of the universe, cosmic gas clouds were inefficient at making stars, so they collided before star formation occurred. "Those types of mergers can create a thin disk," Kravtsov said.

As for small galaxies, their lack of dust production could account for their inefficient star formation. "All of these separate pieces of evidence that existed somehow all fell into one place," Gnedin observed. "That's what I like as a physicist because physics, in general, is an attempt to understand unifying principles behind different phenomena."

More work remains to be done, however, with input from newly arrived postdoctoral fellows at UChicago and more simulations to be performed on even more powerful supercomputers."That's the next step," Gnedin said.


Source