Friday, December 31, 2010

New Cognitive Robotics Lab Tests Theories of Human Thought

"The real world has a lot of inconsistency that humans handle almost without noticing -- for example, we walk on uneven terrain, we see in shifting light," said Professor Vladislav Daniel Veksler, who is currently teaching Cognitive Robotics."With robots, we can see the problems humans face when navigating their environment."

Cognitive Robotics marries the study of cognitive science -- how the brain represents and transforms information -- with the challenges of a physical environment. Advances in cognitive robotics transfer to artificial intelligence, which seeks to develop more efficient computer systems patterned on the versatility of human thought.

Professor Bram Van Heuveln, who organized the lab, said cognitive scientists have developed a suite of elements -- perception/action, planning, reasoning, memory, decision-making -- that are believed to constitute human thought. When properly modeled and connected, those elements are capable of solving complex problems without the raw power required by precise mathematical computations.

"Suppose we wanted to build a robot to catch fly balls in an outfield. There are two approaches: one uses a lot of calculations -- Newton's law, mechanics, trigonometry, calculus -- to get the robot to be in the right spot at the right time," said Van Heuveln."But that's not the way humans do it. We just keep moving toward the ball. It's a very simple solution that doesn't involve a lot of computation but it gets the job done."

Robotics is an ideal testing ground for that principle because robots act in the real world, and a correct cognitive solution will withstand the unexpected variables presented by the real world.

"The physical world can help us to drive science because it's different from any simulated world we could come up with -- the camera shakes, the motors slip, there's friction, the light changes," Veksler said."This platform -- robotics -- allows us to see that you can't rely on calculations. You have to be adaptive."

The lab is open to all students at Rensselaer. In its first semester, the lab has largely attracted computer science and cognitive science students enrolled in a Cognitive Robotics course taught by Veksler, but Veksler and Van Heuveln hope it will attract more engineering and art students as word of the facility spreads.

"We want different students together in one space -- a place where we can bring the different disciplines and perspectives together," said Van Heuveln."I would like students to use this space for independent research: they come up with the research project, they say 'let's look at this.'"

The lab is equipped with five "Create" robots -- essentially a Roomba robotic vacuum cleaner paired with a laptop; three hand-eye systems; one Chiara (which looks like a large metal crab); and 10 LEGO robots paired with the Sony Handy Board robotic controller.

On a recent day, Jacqui Brunelli and Benno Lee were working on their robot "cat" and "mouse" pair, which try to chase and evade each other respectively; Shane Reilly was improving the computer "vision" of his robotic arm; and Ben Ball was programming his robot to maintain a fixed distance from a pink object waved in front of its "eye."

"The thing that I've learned is that the sensor data isn't exact -- what it 'sees' constantly changes by a few pixels -- and to try to go by that isn't going to work," said Ball, a junior and student of computer science and physics.

Ball said he is trying to pattern his robot on a more human approach.

"We don't just look at an object and walk toward it. We check our position, adjusting our course," Ball said."I need to devise an iterative approach where the robot looks at something, then moves, then looks again to check its results."

The work of the students, who program their robots with the Tekkotsu open-source software, could be applied in future projects, said Van Heuveln.

"As a cognitive scientist, I want this to be built on elements that are cognitively plausible and that are recyclable -- parts of cognition that I can apply to other solutions as well," said Van Heuveln."To me, that's a heck of a lot more interesting than the computational solution."

Even in a generic domain, their early investigations show that a cognitive approach employing limited resources can outpace far more powerful computers using a brute-force approach, said Veksler.

"We look to humans not just because we want to simulate what we do, which is an interesting problem in itself, but also because we're smart," said Veksler."Some of the things we have, like limited working memory -- which may seem like a bad thing -- are actually optimal for solving problems in our environment. If you remembered everything, how would you know what's important?"


Source

Tuesday, December 7, 2010

Using Chaos to Model Geophysical Phenomena

"Geophysical phenomena are still not fully understood, especially in turbulent regimes," explains Gary Froyland at the School of Mathematics and Statistics and the Australian Research Council Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) at the University of New South Wales in Australia.

"Nevertheless, it is very important that scientists can quantify the 'transport' properties of these geophysical systems: Put very simply, how does a packet of air or water get from A to B, and how large are these packets? An example of one of these packets is the Antarctic polar vortex, a rotating mass of air in the stratosphere above Antarctica that traps chemicals such as ozone and chlorofluorocarbons (CFCs), exacerbating the effect of the CFCs on the ozone hole," Froyland says.

In the American Institute of Physics' journal CHAOS, Froyland and his research team, including colleague Adam Monahan from the School of Earth and Ocean Sciences at the University of Victoria in Canada, describe how they developed the first direct approach for identifying these packets, called "coherent sets" due to their nondispersive properties.

This technique is based on so-called "transfer operators," which represent a complete description of the ensemble evolution of the fluid. The transfer operator approach is very simple to implement, they say, requiring only singular vector computations of a matrix of transitions induced by the dynamics.
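To make that recipe concrete, here is a toy numerical sketch, assuming the flow domain has been cut into grid boxes and that trajectory data record which box each tracked parcel starts and ends in. This is only an illustration of the singular-vector step, not the authors' implementation, and the inputs (start_boxes, end_boxes) are hypothetical stand-ins for real trajectory data.

```python
import numpy as np

def coherent_sets(start_boxes, end_boxes, n_boxes):
    """Toy transfer-operator sketch (not the authors' code).

    start_boxes, end_boxes: for each tracked fluid parcel, the index of the grid
    box it occupies at the initial and final time. Builds the row-stochastic
    matrix of transitions between boxes, then uses its singular vectors: the
    sign pattern of the second vectors suggests a partition into two minimally
    mixing ("coherent") sets.
    """
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(start_boxes, end_boxes):
        P[i, j] += 1.0                              # count parcels moving box i -> box j
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

    U, s, Vt = np.linalg.svd(P)
    labels_start = U[:, 1] > 0    # candidate coherent set at the initial time
    labels_end = Vt[1, :] > 0     # its image at the final time
    return labels_start, labels_end
```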

When tested using European Centre for Medium Range Weather Forecasting (ECMWF) data, they found that their new methodology was significantly better than existing technologies for identifying the location and transport properties of the vortex.

The transfer operator methodology has myriad applications in atmospheric science and physical oceanography to discover the main transport pathways in the atmosphere and oceans, and to quantify the transport. "As atmosphere-ocean models continue to increase in resolution with improved computing power, the analysis and understanding of these models with techniques such as transfer operators must be undertaken beyond pure simulation," says Froyland.

Their next application will be the Agulhas rings off the South African coast, because the rings are responsible for a significant amount of transport of warm water and salt between the Indian and Atlantic Oceans.



Source

Thursday, December 2, 2010

New Psychology Theory Enables Computers to Mimic Human Creativity

Solving this"insight problem" requires creativity, a skill at which humans excel (the coin is a fake --"B.C." and Arabic numerals did not exist at the time) and computers do not. Now, a new explanation of how humans solve problems creatively -- including the mathematical formulations for facilitating the incorporation of the theory in artificial intelligence programs -- provides a roadmap to building systems that perform like humans at the task.

Ron Sun, Rensselaer Polytechnic Institute professor of cognitive science, said the new "Explicit-Implicit Interaction Theory," recently introduced in an article in Psychological Review, could be used for future artificial intelligence.

"As a psychological theory, this theory pushes forward the field of research on creative problem solving and offers an explanation of the human mind and how we solve problems creatively," Sun said."But this model can also be used as the basis for creating future artificial intelligence programs that are good at solving problems creatively."

The paper, titled "Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model," by Sun and Sébastien Hélie of the University of California, Santa Barbara, appeared in the July edition of Psychological Review. Discussion of the theory is accompanied by mathematical specifications for the "CLARION" cognitive architecture -- a computer program developed by Sun's research group to act like a cognitive system -- as well as successful computer simulations of the theory.

In the paper, Sun and Hélie compared the performance of the CLARION model using "Explicit-Implicit Interaction" theory with results from previous human trials -- including tests involving the coin question -- and found the results to be nearly identical in several aspects of problem solving.

In the tests involving the coin question, human subjects were given a chance to respond after being interrupted either to discuss their thought process or to work on an unrelated task. In that experiment, 35.6 percent of participants answered correctly after discussing their thinking, while 45.8 percent of participants answered correctly after working on another task.

In 5,000 runs of the CLARION program set for similar interruptions, CLARION answered correctly 35.3 percent of the time in the first instance, and 45.3 percent of the time in the second instance.

"The simulation data matches the human data very well," said Sun.

Explicit-Implicit Interaction theory is the most recent advance on a well-regarded outline of creative problem solving known as "Stage Decomposition," developed by Graham Wallas in his seminal 1926 book "The Art of Thought." According to stage decomposition, humans go through four stages -- preparation, incubation, insight (illumination), and verification -- in solving problems creatively.

Building on Wallas' work, several disparate theories have since been advanced to explain the specific processes used by the human mind during the stages of incubation and insight. Competing theories propose that incubation -- a period away from deliberative work -- is a time of recovery from fatigue of deliberative work, an opportunity for the mind to work unconsciously on the problem, a time during which the mind discards false assumptions, or a time in which solutions to similar problems are retrieved from memory, among other ideas.

Each theory can be represented mathematically in artificial intelligence models. However, most models adopt a single theory rather than seeking to incorporate several, and they are therefore fragmentary at best.

Sun and Hélie's Explicit-Implicit Interaction (EII) theory integrates several of the competing theories into a larger equation.

"EII unifies a lot of fragmentary pre-existing theories," Sun said."These pre-existing theories only account for some aspects of creative problem solving, but not in a unified way. EII unifies those fragments and provides a more coherent, more complete theory."

The basic principles of EII propose the coexistence of two different types of knowledge and processing: explicit and implicit. Explicit knowledge is easier to access and verbalize, can be rendered symbolically, and requires more attention to process. Implicit knowledge is relatively inaccessible, harder to verbalize, vaguer, and requires less attention to process.

In solving a problem, explicit knowledge could be the knowledge used in reasoning, deliberately thinking through different options, while implicit knowledge is the intuition that gives rise to a solution suddenly. Both types of knowledge are involved simultaneously to solve a problem and reinforce each other in the process. By including this principle in each step, Sun was able to achieve a successful system.

"This tells us how creative problem solving may emerge from the interaction of explicit and implicit cognitive processes; why both types of processes are necessary for creative problem solving, as well as in many other psychological domains and functionalities," said Sun.



Source

Wednesday, December 1, 2010

Genomic Fault Zones Come and Go: Fragile Regions in Mammalian Genomes Go Through 'Birth and Death' Process

"The genomic architecture of every species on Earth changes on the evolutionary time scale and humans are not an exception. What will be the next big change in the human genome remains unknown, but our approach could be useful in determining where in the human genome those changes may occur," said Pavel Pevzner, a UC San Diego computer science professor and an author on the new study. Pevzner studies genomes and genome evolution from a computational perspective in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering.

The fragile regions of genomes are prone to"genomic earthquakes" that can trigger chromosome rearrangements, disrupt genes, alter gene regulation and otherwise play an important role in genome evolution and the emergence of new species. For example, humans have 23 chromosomes while some other apes have 24 chromosomes, a consequence of a genome rearrangement that fused two chromosomes in our ape ancestor into human chromosome 2.

This work was performed by Pevzner and Max Alekseyev -- a computer scientist who recently finished his Ph.D. in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering. Alekseyev is now a computer science professor at the University of South Carolina.

Turnover Fragile Breakage Model

"The main conclusion of the new paper is that these fragile regions are moving," said Pevzner.

In 2003, Pevzner and UC San Diego mathematics professor Glen Tesler published results claiming that genomes have "fault zones" or genomic regions that are more prone to rearrangements than other regions. Their "Fragile Breakage Model" countered the then largely accepted "Random Breakage Model" -- which implies that there are no rearrangement hotspots in mammalian genomes. While the Fragile Breakage Model has been supported by many studies in the last seven years, the precise locations of fragile regions in the human genome remain elusive.

The new work published in Genome Biology offers an update to the Fragile Breakage Model called the "Turnover Fragile Breakage Model." The findings demonstrate that the fragile regions undergo a birth and death process over evolutionary timescales and provide a clue to where the fragile regions in the human genome are located.

Do the Math: Find Fragile Regions

Finding the fragile regions within genomes is akin to looking at a mixed up deck of cards and trying to determine how many times it has been shuffled.

Looking at a genome, you may identify breaks, but to say it is a fragile region, you have to know that breaks occurred more than once at the same genomic position. "We are figuring out which regions underwent multiple genome earthquakes by analyzing the present-day genomes that survived these earthquakes that happened millions of years ago. The notion of rearrangements cannot be applied to a single genome at a single point in time. It's relevant when looking at more than one genome," said Pevzner, explaining the comparative genomics approach they took.
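As an illustration of the kind of comparison involved, the toy function below treats two genomes as orderings of shared synteny blocks and lists the adjacencies of one that are broken in the other. It is a simplified sketch of breakpoint detection on made-up data, not the multi-genome rearrangement analysis used in the paper.

```python
def breakpoints(genome_a, genome_b):
    """List adjacencies of genome_a that are not preserved in genome_b.

    Each genome is an ordered list of shared marker (synteny block) IDs,
    e.g. [1, 2, 3, 4, 5] vs. [1, 3, 2, 4, 5] (hypothetical toy data).
    A pair of markers that sit next to each other in one genome but not in
    the other marks a breakpoint -- a place where at least one rearrangement
    cut the genome.
    """
    neighbors_b = set()
    for x, y in zip(genome_b, genome_b[1:]):
        neighbors_b.add((x, y))
        neighbors_b.add((y, x))   # orientation-agnostic adjacency

    return [(x, y) for x, y in zip(genome_a, genome_a[1:])
            if (x, y) not in neighbors_b]

# A position that shows up as a breakpoint in many pairwise comparisons,
# especially among closely related genomes, is a candidate fragile region.
print(breakpoints([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # [(1, 2), (3, 4)]
```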

"It was noticed that while fragile regions may be shared across different genomes, most often such shared fragile regions are found in evolutionarily close genomes. This observation led us to a conclusion that fragility of any particular genomic position may appear only for a limited amount of time. The newly proposed Turnover Fragile Breakage Model postulates that fragile regions are subject to a 'birth and death' process and thus have limited lifespan," explained Alekseyev.

The Turnover Fragile Breakage Model suggests that genome rearrangements are more likely to occur at the sites where rearrangements have recently occurred -- and that these rearrangement sites change over tens of millions of years. Thus, the best clue to the current locations of fragile regions in the human genome is offered by rearrangements that happened in our closest ancestors -- chimpanzee and other primates.

Pevzner is eagerly awaiting sequenced primate genomes from the Genome 10K Project. Sequencing the genomes of 10,000 vertebrate species -- including hundreds of primates -- is bound to provide new insights into human evolutionary history and possibly even future rearrangements in the human genome.

"The most likely future rearrangements in human genome will happen at the sites that were recently disrupted in primates," said Pevzner.

Work tied to the new Turnover Fragile Breakage Model may also be useful for understanding genome rearrangements at the level of individuals, rather than entire species. In the future, the computer scientists hope to use similar tools to look at the chromosomal rearrangements that occur within the cells of individual cancer patients over and over again in order to develop new cancer diagnostics and drugs.

Pavel Pevzner is the Ronald R. Taylor Professor of Computer Science at UC San Diego; Director of the NIH Center for Computational Mass Spectrometry; and a Howard Hughes Medical Institute (HHMI) Professor.



Source

Thursday, November 25, 2010

'Racetrack' Magnetic Memory Could Make Computer Memory 100,000 Times Faster

Annoyed by how long it took his computer to boot up, Kläui began to think about an alternative. Hard disks are cheap and can store enormous quantities of data, but they are slow; every time a computer boots up, 2-3 minutes are lost while information is transferred from the hard disk into RAM (random access memory). The global cost in terms of lost productivity and energy consumption runs into the hundreds of millions of dollars a day.

Like the tried and true VHS videocassette, the proposed solution involves data recorded on magnetic tape. But the similarity ends there; in this system the tape would be a nickel-iron nanowire, a million times smaller than the classic tape. And unlike a magnetic videotape, in this system nothing moves mechanically. The bits of information stored in the wire are simply pushed around inside the tape using a spin polarized current, attaining the breakneck speed of several hundred meters per second in the process. It's like reading an entire VHS cassette in less than a second.

In order for the idea to be feasible, each bit of information must be clearly separated from the next so that the data can be read reliably. This is achieved by using domain walls with magnetic vortices to delineate two adjacent bits. To estimate the maximum velocity at which the bits can be moved, Kläui and his colleagues carried out measurements on vortices and found that the physical mechanism could allow higher access speeds than expected.

Their results were published online October 25, 2010, in the journal Physical Review Letters. Scientists at the Zurich Research Center of IBM (which is developing a racetrack memory) have confirmed the importance of the results in a Viewpoint article. Millions or even billions of nanowires would be embedded in a chip, providing enormous capacity on a shock-proof platform. A market-ready device could be available in as little as 5-7 years.

Racetrack memory promises to be a real breakthrough in data storage and retrieval. Racetrack-equipped computers would boot up instantly, and their information could be accessed 100,000 times more rapidly than with a traditional hard disk. They would also save energy. RAM needs to be powered every millionth of a second, so an idle computer consumes up to 300 mW just maintaining data in RAM. Because Racetrack memory doesn't have this constraint, energy consumption could be slashed by nearly a factor of 300, to a few mW while the memory is idle. It's an important consideration: computing and electronics currently consume 6% of worldwide electricity, and that share is forecast to increase to 15% by 2025.


Source

Wednesday, November 24, 2010

New Standard Proposed for Supercomputing

The rating system, Graph500, tests supercomputers for their skill in analyzing large, graph-based structures that link the huge numbers of data points present in biological, social and security problems, among other areas.

"By creating this test, we hope to influence computer makers to build computers with the architecture to deal with these increasingly complex problems," Sandia researcher Richard Murphy said.

Rob Leland, director of Sandia's Computations, Computers, and Math Center, said,"The thoughtful definition of this new competitive standard is both subtle and important, as it may heavily influence computer architecture for decades to come."

The group isn't trying to compete with Linpack, the current standard test of supercomputer speed, Murphy said. "There have been lots of attempts to supplant it, and our philosophy is simply that it doesn't measure performance for the applications we need, so we need another, hopefully complementary, test," he said.

Many scientists view Linpack as a "plain vanilla" test mechanism that tells how fast a computer can perform basic calculations, but has little relationship to the actual problems the machines must solve.

The impetus to achieve a supplemental test code came about at "an exciting dinner conversation at Supercomputing 2009," said Murphy. "A core group of us recruited other professional colleagues, and the effort grew into an international steering committee of over 30 people."

Many large computer makers have indicated interest, said Murphy, adding there's been buy-in from Intel, IBM, AMD, NVIDIA, and Oracle corporations. "Whether or not they submit test results remains to be seen, but their representatives are on our steering committee."

Each organization has donated time and expertise of committee members, he said.

While some computer makers and their architects may prefer to ignore a new test for fear their machine will not do well, the hope is that large-scale demand for a more complex test will be a natural outgrowth of the greater complexity of problems.

Studies show that moving data around (not simple computations) will be the dominant energy problem on exascale machines, the next frontier in supercomputing, and the subject of a nascent U.S. Department of Energy initiative to achieve this next level of operations within a decade, Leland said. (Petascale and exascale machines perform 10^15 and 10^18 operations per second, respectively.)

Part of the goal of the Graph500 list is to point out that in addition to more expense in data movement, any shift in application base from physics to large-scale data problems is likely to further increase the application requirements for data movement, because memory and computational capability increase proportionally. That is, an exascale computer requires an exascale memory.

"In short, we're going to have to rethink how we build computers to solve these problems, and the Graph500 is meant as an early stake in the ground for these application requirements," said Murphy.

How does it work?

Large data problems are very different from ordinary physics problems.

Unlike a typical computation-oriented application, large-data analysis often involves searching large, sparse data sets while performing only very simple computational operations.

To deal with this, the Graph500 benchmark creates two computational kernels: a large graph that inscribes and links huge numbers of participants, and a parallel search of that graph.

"We want to look at the results of ensembles of simulations, or the outputs of big simulations in an automated fashion," Murphy said."The Graph500 is a methodology for doing just that. You can think of them being complementary in that way -- graph problems can be used to figure out what the simulation actually told us."

Performance for these applications is dominated by the ability of the machine to sustain a large number of small, nearly random remote data accesses across its memory system and interconnects, as well as the parallelism available in the machine.
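At its core, the search kernel is a breadth-first traversal whose cost is dominated by those many small, irregular memory accesses rather than by arithmetic. The sketch below is a single-threaded toy version for illustration only; the official benchmark generates a synthetic graph and runs the search in parallel at enormous scale.

```python
from collections import deque

def bfs_parent_tree(adjacency, root):
    """Minimal, single-threaded sketch of a Graph500-style search kernel.

    adjacency: dict mapping a vertex to a list of its neighbors (a large,
    sparse graph). Returns a parent map describing the BFS tree from `root`.
    The work per vertex is trivial; the cost is chasing pointers through
    memory, which is exactly the behavior the benchmark is meant to stress.
    """
    parent = {root: root}
    frontier = deque([root])
    while frontier:
        v = frontier.popleft()
        for w in adjacency.get(v, []):
            if w not in parent:        # first visit: record who discovered it
                parent[w] = v
                frontier.append(w)
    return parent

# Toy usage on a tiny graph; a real run would traverse billions of edges.
g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_parent_tree(g, 0))   # {0: 0, 1: 0, 2: 0, 3: 1}
```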

Five problems for these computational kernels could be cybersecurity, medical informatics, data enrichment, social networks and symbolic networks:

  • Cybersecurity: Large enterprises may create 15 billion log entries per day and require a full scan.
  • Medical informatics: There are an estimated 50 million patient records, with 20 to 200 records per patient, resulting in billions of individual pieces of information, all of which need entity resolution: in other words, which records belong to her, him or somebody else.
  • Data enrichment: Petascale data sets include maritime domain awareness with hundreds of millions of individual transponders, tens of thousands of ships, and tens of millions of pieces of individual bulk cargo. These problems also have different types of input data.
  • Social networks: Almost unbounded, like Facebook.
  • Symbolic networks: Often petabytes in size. One example is the human cortex, with 25 billion neurons and approximately 7,000 connections each.

"Many of us on the steering committee believe that these kinds of problems have the potential to eclipse traditional physics-based HPC {high performance computing} over the next decade," Murphy said.

While general agreement exists that complex simulations work well for the physical sciences, where lab work and simulations play off each other, there is some doubt they can solve social problems that have essentially infinite numbers of components. These include terrorism, war, epidemics and societal problems.

"These are exactly the areas that concern me," Murphy said."There's been good graph-based analysis of pandemic flu. Facebook shows tremendous social science implications. Economic modeling this way shows promise.

"We're all engineers and we don't want to over-hype or over-promise, but there's real excitement about these kinds of big data problems right now," he said."We see them as an integral part of science, and the community as a whole is slowly embracing that concept.

"However, it's so new we don't want to sound as if we're hyping the cure to all scientific ills. We're asking, 'What could a computer provide us?' and we know we're ignoring the human factors in problems that may stump the fastest computer. That'll have to be worked out."


Source

Tuesday, November 23, 2010

Supercomputing Center Breaks the Petaflops Barrier

NERSC's newest supercomputer, a 153,408 processor-core Cray XE6 system, posted a performance of 1.05 petaflops (quadrillions of calculations per second) running the Linpack benchmark. In keeping with NERSC's tradition of naming computers for renowned scientists, the system is named Hopper in honor of Admiral Grace Hopper, a pioneer in software development and programming languages.

NERSC serves one of the largest research communities of all supercomputing centers in the United States. The center's supercomputers are used to tackle a wide range of scientific challenges, including global climate change, combustion, clean energy, new materials, astrophysics, genomics, particle physics and chemistry. The more than 400 projects being addressed by NERSC users represent the research mission areas of DOE's Office of Science.

The increasing power of supercomputers helps scientists study problems in greater detail and with greater accuracy, such as increasing the resolution of climate models and creating models of new materials with thousands of atoms. Supercomputers are increasingly used to complement scientific experimentation by allowing researchers to test theories using computational models and to analyze large scientific data sets. NERSC is also home to Franklin, a 38,128-core Cray XT4 supercomputer with a Linpack performance of 266 teraflops (trillions of calculations per second). Franklin is ranked number 27 on the newest TOP500 list.

The system, installed in September 2010, is funded by DOE's Office of Advanced Scientific Computing Research.


Source

Making Stars: How Cosmic Dust and Gas Shape Galaxy Evolution

"Formation of galaxies is one of the biggest remaining questions in astrophysics," said Andrey Kravtsov, associate professor in astronomy& astrophysics at the University of Chicago.

Astrophysicists are moving closer to answering that question, thanks to a combination of new observations and supercomputer simulations, including those conducted by Kravtsov and Nick Gnedin, a physicist at Fermi National Accelerator Laboratory.

Gnedin and Kravtsov published new results based on their simulations in the May 1, 2010 issue of The Astrophysical Journal, explaining why stars formed more slowly in the early history of the universe than they did much later. The paper quickly came to the attention of Robert C. Kennicutt Jr., director of the University of Cambridge's Institute of Astronomy and co-discoverer of one of the key observational findings about star formation in galaxies, known as the Kennicutt-Schmidt relation.

In the June 3, 2010 issue of Nature, Kennicutt noted that the recent spate of observations and theoretical simulations bodes well for the future of astrophysics. Of the Astrophysical Journal paper, Kennicutt wrote, "Gnedin and Kravtsov take a significant step in unifying these observations and simulations, and provide a prime illustration of the recent progress in the subject as a whole."

Star-formation law

Kennicutt's star-formation law relates the amount of gas in galaxies in a given area to the rate at which it turns into stars over the same area. The relation has been quite useful when applied to galaxies observed late in the history of the universe, but recent observations by Arthur Wolfe of the University of California, San Diego, and Hsiao-Wen Chen, assistant professor in astronomy and astrophysics at UChicago, indicate that the relation fails for galaxies observed during the first two billion years following the big bang.
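For orientation, the relation is usually quoted in surface-density form; the exponent below is the value commonly cited from Kennicutt's earlier observational work, not a number taken from this article.

```latex
% Kennicutt-Schmidt star-formation law (surface-density form; commonly quoted
% exponent, included here only for orientation): the star-formation rate per
% unit area scales as a power of the gas surface density.
\Sigma_{\mathrm{SFR}} \;\propto\; \Sigma_{\mathrm{gas}}^{\,N}, \qquad N \approx 1.4
```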

Gnedin and Kravtsov's work successfully explains why. "What it shows is that at early stages of evolution, galaxies were much less efficient in converting their gas into stars," Kravtsov said.

Stellar evolution leads to increasing abundance of dust, as stars produce elements heavier than helium, including carbon, oxygen, and iron, which are key elements in dust particles.

"Early on, galaxies didn't have enough time to produce a lot of dust, and without dust it's very difficult to form these stellar nurseries," Kravtsov said."They don't convert the gas as efficiently as galaxies today, which are already quite dusty."

The star-formation process begins when interstellar gas clouds become increasingly dense. At some point the hydrogen atoms start combining to form molecules in certain cold regions of these clouds. A hydrogen molecule forms when two hydrogen atoms join. They do so inefficiently in empty space, but find each other more readily on the surface of a cosmic dust particle.

"The biggest particles of cosmic dust are like the smallest particles of sand on good beaches in Hawaii," Gnedin said.

These hydrogen molecules are fragile and easily destroyed by the intense ultraviolet light emitted from massive young stars. But in some galactic regions dark clouds, so-called because of the dust they contain, form a layer that shields the hydrogen molecules from the destructive light of other stars.

Stellar nurseries

"I like to think about stars as being very bad parents, because they provide a bad environment for the next generation," Gnedin joked. The dust therefore provides a protective environment for stellar nurseries, Kravtsov noted.

"There is a simple connection between the presence of dust in this diffuse gas and its ability to form stars, and that's something that we modeled for the first time in these galaxy-formation simulations," Kravtsov said."It's very plausible, but we don't know for sure that that's exactly what's happening."

The Gnedin-Kravtsov model also provides a natural explanation for why spiral galaxies predominantly fill the sky today, and why small galaxies form stars slowly and inefficiently.

"We usually see very thin disks, and those types of systems are very difficult to form in galaxy-formation simulations," Kravtsov said.

That's because astrophysicists have assumed that galaxies formed gradually through a series of collisions. The problem: simulations show that when galaxies merge, they form spheroidal structures that look more elliptical than spiral.

But early in the history of the universe, cosmic gas clouds were inefficient at making stars, so they collided before star formation occurred. "Those types of mergers can create a thin disk," Kravtsov said.

As for small galaxies, their lack of dust production could account for their inefficient star formation. "All of these separate pieces of evidence that existed somehow all fell into one place," Gnedin observed. "That's what I like as a physicist because physics, in general, is an attempt to understand unifying principles behind different phenomena."

More work remains to be done, however, with input from newly arrived postdoctoral fellows at UChicago and more simulations to be performed on even more powerful supercomputers. "That's the next step," Gnedin said.


Source