Saturday, January 22, 2011

Beating the Competition: Scientists Discover How the Size of Networks Can Skyrocket

The results, which are published in Nature Physics, were part of a theoretical study carried out by researchers from the Max Planck Institute for Dynamics and Self-Organization (MPIDS), the Bernstein Center for Computational Neuroscience Göttingen and the University of Göttingen. This study mathematically describes for the first time the influence of single additional links in a network.

Imagine the following scenario: In your sports team you get to know a new player and arrange to go out and see a movie on the next weekend. The new team member brings along three friends -- and suddenly by adding one new contact, your own circle of friends has grown by four people. Growth processes of this sort occur in many networks: Neurons in the brain constantly establish new connections, websites link to each other and a person travelling infected with influenza creates a network of infected places with each intermediate stop. From a scientist's point of view, such growth processes are still poorly understood: How does a network change when single links are added? How quickly does a network grow in this way?

To answer these questions, the scientists from Göttingen tracked the growth of networks link by link. A new connection, however, does not merely add one new element: it can also merge two networks (as in the sports team example above). The researchers focused on a special form of network growth that introduces competition between possible links: if several new connections are possible, only one is created -- the one that results in the smallest new network.
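This competitive rule is simple enough to simulate directly. The sketch below, written in Python, grows a network one link at a time using a union-find structure; the number of candidate links per step and all names are illustrative choices, not the authors' exact model:

```python
import random

def smallest_network_growth(n, steps, candidates=2, seed=0):
    """Grow a network of n elements one link at a time.

    At every step several candidate links are drawn at random, but only
    the one resulting in the smallest merged network is actually added --
    the competition rule described in the article. Returns the size of
    the largest network after each step. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    parent = list(range(n))      # union-find forest
    size = [1] * n               # size stored at each cluster root
    largest, history = 1, []

    def find(x):                 # find cluster root, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(steps):
        best = None
        for _ in range(candidates):
            a, b = find(rng.randrange(n)), find(rng.randrange(n))
            merged = size[a] + size[b] if a != b else size[a]
            if best is None or merged < best[0]:
                best = (merged, a, b)
        _, a, b = best
        if a != b:               # add the winning link: merge clusters
            parent[a] = b
            size[b] += size[a]
            largest = max(largest, size[b])
        history.append(largest)
    return history
```

Tracking the largest network over time makes the sudden growth spurt the article goes on to describe easy to observe.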

"There is evidence, that growing networks of neurons at first prefer forming small groups and thus roughly follow the growth process we discuss," says Jan Nagler, staff researcher at the University of Göttingen and the MPIDS.

The situation can be compared to the social contacts established in a summer camp for children, none of whose participants know each other at the beginning of their vacation. Most likely, the children will at first team up in small groups and pairs. If such a pair wants to expand its social circle, it typically proceeds cautiously, approaching another pair or a small group rather than a large clique. At the beginning of the vacation, the social networks within the camp therefore grow slowly. At the end, all children will have become acquainted: The network has then reached its largest possible size and connects all elements of the system.

"In our study we zoomed in on an intermediate growth phase. This phase arises after the elements have begun to sporadically connect into small groups, but before the entire system is linked," explains Marc Timme, head of the Network Dynamics Group at MPIDS. How do the many small networks link to form a whole? Are several large networks created at the same time or does one dominant network develop that towers above the others? In addition to performing computer simulations, the Göttingen researchers were for the first time able to derive mathematical expressions that describe this growth phase link by link.

The scientists found that after a certain number of new links, a sudden growth spurt occurs: the size of the largest network within the system increases dramatically. "With respect to the size of the system, this jump is more dramatic in small systems than in large ones," says Nagler. However, even in systems that consist of a huge number of elements -- comparable, for example, to the number of neurons in the brain -- the size of the largest network can double. "At first, many networks of moderate size develop in this way," says Timme. Thus, a dominant spanning network emerges only at a late stage in the growth process.

As a next step, the researchers want to identify which forms of competition in natural systems from biology and physics give rise to this rapid growth, and to study the consequences of these growth spurts.


Source

Friday, January 21, 2011

For Robust Robots, Let Them Be Babies First

Or at least that's not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.

In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in "infant" tadpole-like forms and more time in "adult" four-legged forms.

These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait -- better able to deal with, say, being knocked with a stick -- than the ones that had learned to walk using upright legs from the beginning.

"This paper shows that body change, morphological change, actually helps us design better robots," Bongard says."That's never been attempted before."

Robots are complex

Bongard's research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. "We have an engineering goal," he says, "to produce robots as quickly and consistently as possible." In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.

"But we don't know how to program robots very well," Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.

"They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there's neurons and there's sensors and motors and they're all turning on and off in parallel," Bongard says,"and the emergent behavior from the complex system which is a robot, is some useful task like clearing up a construction site or laying pavement for a new road." Or at least that's the goal.

But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.

Which is why Bongard, an assistant professor in UVM's College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors -- rather than trying to program the robots' behavior directly.

His new work may help.

To the light

Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space."It looks like a modern video game," he says. Each creature -- or, rather, generations of the creatures -- then run a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait -- based on its body plan -- that can get it to the light source without tipping over.
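The gait-search loop Bongard describes can be sketched as a toy genetic algorithm. In the real experiments a genome is scored by simulating the robot's walk toward the light; here the fitness function is supplied by the caller, and the population size, mutation scale, and selection scheme are illustrative guesses rather than the study's actual settings:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Toy genetic algorithm: keep a population of candidate controller
    genomes, score each with the caller's fitness function, and breed
    the best with small Gaussian mutations. (Illustrative sketch only;
    the real system scores a genome by simulating the robot's walk.)
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best genomes first
        parents = pop[: pop_size // 2]           # truncation selection
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                 # parents survive intact
    return max(pop, key=fitness)
```

Because the top half of each generation survives unchanged, the best score can never get worse from one generation to the next.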

"The robots have 12 moving parts," Bongard says."They look like the simplified skeleton of a mammal: it's got a jointed spine and then you have four sticks -- the legs -- sticking out."

Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others run the full set of simulations with upright legs, like mammals.

And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs, ultimately perform better, getting to the desired behavior faster?

"The snake and reptilian robots are, in essence, training wheels," says Bongard,"they allow evolution to find motion patterns quicker, because those kinds of robots can't fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it's able to tackle the balance problem after already solving the movement problem."

Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?

"Yes," says Bongard,"We're copying nature, we're copying evolution, we're copying neural science when we're building artificial brains into these robots." But the key point is that his robots don't only evolve their artificial brain -- the neural network controller -- but rather do so in continuous interaction with a changing body plan. A tadpole can't kick its legs, because it doesn't have any yet; it's learning some things legless and others with legs.

And this may help to explain the most surprising -- and useful -- finding in Bongard's study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn't before faced, like efforts to tip them over.

Bongard is not exactly sure why this is, but he thinks it's because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, squat, and then upright gave the evolving robots resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.

"Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies," Bongard writes in his PNAS paper"This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world."

"One thing that has been left out all this time is the obvious fact that in nature it's not that the animal's body stays fixed and its brain gets better over time," he says,"in natural evolution animals bodies and brains are evolving together all the time." A human infant, even if she knew how, couldn't walk: her bones and joints aren't up to the task until she starts to experience stress on the foot and ankle.

That hasn't been done in robotics for an obvious reason: "it's very hard to change a robot's body," Bongard says, "it's much easier to change the programming inside its head."

Lego proof

Still, Bongard gave it a try. After running 5000 simulations, each taking 30 hours on the parallel processors in UVM's Vermont Advanced Computing Center -- "it would have taken 50 or 100 years on a single machine," Bongard says -- he took the task into the real world.

"We built a relatively simple robot, out of a couple of Lego Mindstorm kits, to demonstrate that you actually could do it," he says. This physical robot is four-legged, like in the simulation, but the Lego creature wears a brace on its front and back legs."The brace gradually tilts the robot," as the controller searches for successful movement patterns, Bongard says,"so that the legs go from horizontal to vertical, from reptile to quadruped.

"While the brace is bending the legs, the controller is causing the robot to move around, so it's able to move its legs, and bend its spine," he says,"it's squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it's walking like a coyote."

"It's a very simple prototype," he says,"but it works; it's a proof of concept."


Source

Thursday, January 20, 2011

Robotic Ghost Knifefish Is 'Born'

The robot -- created after observing and creating computer simulations of the black ghost knifefish -- could pave the way for nimble robots that could perform underwater recovery operations or long-term monitoring of coral reefs.

Led by Malcolm MacIver, associate professor of mechanical and biomedical engineering at Northwestern's McCormick School of Engineering and Applied Science, the team's results are published in the Journal of the Royal Society Interface.

The black ghost knifefish, which is active at night in rivers of the Amazon basin, hunts for prey using a weak electric field around its entire body and moves both forward and backward using a ribbon-like fin on the underside of its body.

MacIver, a robotics expert who served as a scientific consultant for "Tron: Legacy" and is science advisor for the television series "Caprica," has studied the knifefish for years. Working with Neelesh Patankar, associate professor of mechanical engineering and co-author of the paper, he has created mechanical models of the fish in hopes of better understanding how the nervous system sends messages throughout the body to make it move.

Planning for the robot -- called GhostBot -- began when graduate student Oscar Curet, a co-author of the paper, observed a knifefish suddenly moving vertically in a tank in MacIver's lab.

"We had only tracked it horizontally before," said MacIver, a recent recipient of the Presidential Early Career Award for Scientists and Engineers."We thought, 'How could it be doing this?'"

Further observations revealed that the fish uses only one traveling wave along the fin during horizontal motion (forward or backward, depending on the direction of the wave), but while moving vertically it uses two waves. One of these moves from head to tail, and the other moves from tail to head. The two waves collide and stop at the center of the fin.

The team then created a computer simulation that showed that when these "inward counterpropagating waves" are generated by the fin, horizontal thrust is canceled and the fluid motion generated by the two waves is funneled into a downward jet from the center of the fin, pushing the body up. The flow structure looks like a mushroom cloud with an inverted jet.
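The cancellation of horizontal thrust has a simple one-dimensional analogue: two identical waves traveling in opposite directions sum to a standing wave, which propagates nowhere. The toy calculation below checks that identity numerically; it is an illustration of the wave superposition only, not the team's fluid simulation:

```python
import math

# Two counterpropagating traveling waves on a fin of unit length:
# one moving head-to-tail, one tail-to-head. Their sum is a standing
# wave, 2 sin(kx) cos(wt), so the horizontally propagating component
# cancels -- a toy version of why the opposed fin waves produce no
# net horizontal thrust.
k, w = 2 * math.pi, 3.0          # wavenumber and angular frequency

def head_to_tail(x, t):
    return math.sin(k * x - w * t)

def tail_to_head(x, t):
    return math.sin(k * x + w * t)

def superposition(x, t):
    return head_to_tail(x, t) + tail_to_head(x, t)

def standing_wave(x, t):         # trig identity: 2 sin(kx) cos(wt)
    return 2 * math.sin(k * x) * math.cos(w * t)
```

Sampling the superposition at any point and time reproduces the standing-wave form exactly, confirming that the traveling components cancel.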

"It's interesting because you're getting force coming off the animal in a completely unexpected direction that allows it to do acrobatics that, given its lifestyle of hunting and maneuvering among tree roots, makes a huge amount of sense," MacIver said.

The group then hired Kinea Design, a design firm founded by Northwestern faculty that specializes in human interactive mechatronics, and worked closely with its co-founder, Michael Peshkin, professor of mechanical engineering, to design and build a robot. The company fashioned a forearm-length waterproof robot with 32 motors giving independent control of the 32 artificial fin rays of the lycra-covered artificial fin. (That means the robot has 32 degrees of freedom. In comparison, industrial robot arms typically have fewer than 10.) Seven months and $200,000 later, the GhostBot came to life.

The group took the robot to Harvard University to test it in a flow tunnel in the lab of George V. Lauder, professor of ichthyology and co-author of the paper. The team measured the flow around the robotic fish by placing reflective particles in the water, then shining a laser sheet into the water. That allowed them to track the flow of the water by watching the particles, and the test showed the water flowing around the biomimetic robot just as computer simulations predicted it would.

"It worked perfectly the first time," MacIver said."We high-fived. We had the robot in the real world being pushed by real forces."

The robot is also outfitted with an electrosensory system that works similarly to the knifefish's, and MacIver and his team hope to next improve the robot so it can autonomously use its sensory signals to detect an object and then use its mechanical system to position itself near the object.

Humans excel at creating high-speed, low-maneuverability technologies, like airplanes and cars, MacIver said. But studying animals provides a platform for creating low-speed, high-maneuverability technologies -- technologies that don't currently exist. Potential applications for such a robot include underwater recovery operations, such as plugging a leaking oil pipe, or long-term monitoring of oceanic environments, such as fragile coral reefs.

While the applied work on the robot moves ahead in the lab, the group is pursuing basic science questions as well. "The robot is a tool for uncovering the extremely complicated story of how to coordinate movement in animals," MacIver said. "By simulating and then performing the motions of the fish, we're getting insight into the mechanical basis of the remarkable agility of a very acrobatic, non-visual fish. The next step is to take the sensory work and unite the two."


Source

Wednesday, January 19, 2011

Mathematical Model for Moving Bottlenecks in Road Traffic

Such incidents motivate the analysis of traffic to minimize similar events and provide insight into road design and construction, such as where to install traffic lights and toll booths, how many lanes to build, and where to construct an overpass or a tunnel. The goals of these analyses are to relieve congestion in high traffic areas, reduce the risk of accidents, and manage safety and security of motorists.

Not surprisingly, vehicular traffic flow has been tackled by mathematicians, engineers and physicists alike. Mathematical approaches to study traffic are usually based on the speed, density and flow of vehicles on a given roadway. In a paper published this month in the SIAM Journal on Mathematical Analysis, authors Corrado Lattanzio, Amelio Maurizi and Benedetto Piccoli propose a mathematical model of vehicular traffic based on the study of a moving bottleneck caused by a slow-moving vehicle within the flow of cars. The effect of moving bottlenecks on flow of traffic is an important factor in evaluating travel times and traveling paths for commuters.

Many different mathematical models have been proposed to study traffic, including models that use second-order equations for mass and momentum, multipopulation models that factor in the varying characteristics of different kinds of vehicles, and dynamic models that consider traffic flows.

Most of the models so far proposed, however, solve the problem of a single vehicle independently of the entire traffic flow, and so are not completely coupled. An example is a PDE-ODE model that used a partial differential equation to model the flow of traffic while using an ordinary differential equation to determine the position of a single vehicle. Since both could be solved independently, the system did not take into account the influence of the single car on the entire traffic flow.

The paper by Lattanzio et al. provides a fully coupled, multi-scale model in which the microscopic position of a single car is taken together with the macroscopic car density on the road. In this micro-macro model, the dynamics of a moving bottleneck caused by a slow-moving vehicle on a street are used to study the effects of disruptions on the flow of traffic. Mathematically, the problem is solved using the fractional step method. In successive time steps, a PDE is first solved for the density of traffic and then the ODE is solved for the position of the slow-moving vehicle.
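The fractional step method can be sketched on the classical Lighthill-Whitham-Richards traffic model with normalized flux f(ρ) = ρ(1 − ρ). Each step first advances the density with a first-order Godunov scheme (the PDE part) and then moves the slow vehicle at the lesser of its own top speed and the local traffic speed (the ODE part). This simplified sketch omits the bottleneck's back-reaction on the surrounding density, which is precisely what the fully coupled model of Lattanzio et al. adds:

```python
def flux(r):
    """Normalized LWR flux: density times speed, with v(rho) = 1 - rho."""
    return r * (1.0 - r)

def godunov_flux(rl, rr):
    """Godunov numerical flux for the concave flux with maximum at 0.5."""
    if rl <= rr:
        return min(flux(rl), flux(rr))
    if rr <= 0.5 <= rl:          # sonic point lies between the states
        return flux(0.5)
    return max(flux(rl), flux(rr))

def step(rho, y, dt, dx, v_slow):
    """One fractional step: advance the traffic density (PDE), then
    advance the slow vehicle's position (ODE). Boundary cells are held
    fixed for simplicity; dt must satisfy the CFL condition."""
    n = len(rho)
    F = [godunov_flux(rho[i], rho[i + 1]) for i in range(n - 1)]
    new = rho[:]
    for i in range(1, n - 1):
        new[i] = rho[i] - dt / dx * (F[i] - F[i - 1])
    # ODE step: the slow vehicle moves at its own top speed or the
    # local traffic speed, whichever is lower.
    i = min(int(y / dx), n - 1)
    y_new = y + dt * min(v_slow, 1.0 - rho[i])
    return new, y_new
```

On a uniform density the scheme leaves the profile unchanged while the vehicle advances, as expected for a constant-state solution.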

By solving the bottleneck problem in a coupled fashion, better transportation designs can be made in anticipation of such inevitable traffic congestion.


Source

Thursday, January 13, 2011

Fruit Fly Nervous System Provides New Solution to Fundamental Computer Network Problem

With a minimum of communication and without advance knowledge of how they are connected with each other, the cells in the fly's developing nervous system manage to organize themselves so that a small number of cells serve as leaders that provide direct connections with every other nerve cell, said author Ziv Bar-Joseph, associate professor of machine learning at Carnegie Mellon University.

The result, the researchers report in the Jan. 14 edition of the journal Science, is the same sort of scheme used to manage the distributed computer networks that perform such everyday tasks as searching the Web or controlling an airplane in flight. But the method used by the fly's nervous system to organize itself is much simpler and more robust than anything humans have concocted.

"It is such a simple and intuitive solution, I can't believe we did not think of this 25 years ago," said co-author Noga Alon, a mathematician and computer scientist at Tel Aviv University and the Institute for Advanced Study in Princeton, N.J.

Bar-Joseph, Alon and their co-authors -- Yehuda Afek of Tel Aviv University and Naama Barkai, Eran Hornstein and Omer Barad of the Weizmann Institute of Science in Rehovot, Israel -- used the insights gained from fruit flies to design a new distributed computing algorithm. They found it has qualities that make it particularly well suited for networks in which the number and position of the nodes are not completely certain. These include wireless sensor networks, such as those used for environmental monitoring, in which sensors are dispersed in a lake or waterway, and systems for controlling swarms of robots.

"Computational and mathematical models have long been used by scientists to analyze biological systems," said Bar-Joseph, a member of the Lane Center for Computational Biology in Carnegie Mellon's School of Computer Science."Here we've reversed the strategy, studying a biological system to solve a long-standing computer science problem."

Today's large-scale computer systems and the nervous system of a fly both take a distributed approach to performing tasks. Though the thousands or even millions of processors in a computing system and the millions of cells in a fly's nervous system must work together to complete a task, none of the elements need to have complete knowledge of what's going on, and the systems must function despite failures by individual elements.

In the computing world, one step toward creating this distributed system is to find a small set of processors that can be used to rapidly communicate with the rest of the processors in the network -- what graph theorists call a maximal independent set (MIS). Every processor in such a network is either a leader (a member of the MIS) or is connected to a leader, but the leaders are not interconnected.

A similar arrangement occurs in the fruit fly, which uses tiny bristles to sense the outside world. Each bristle develops from a nerve cell, called a sensory organ precursor (SOP), which connects to adjoining nerve cells, but does not connect with other SOPs.

For three decades, computer scientists have puzzled over how processors in a network can best elect an MIS. The common solutions use a probabilistic method -- similar to rolling dice -- in which some processors identify themselves as leaders, based in part on how many connections they have with other processors. Processors connected to these self-selected leaders take themselves out of the running; in subsequent rounds, additional processors self-select, and the processors connected to them drop out. At each round, the chance of any processor joining the MIS (becoming a leader) increases as a function of the number of its connections.

This selection process is rapid, Bar-Joseph said, but it entails lots of complicated messages being sent back and forth across the network, and it requires that all of the processors know in advance how they are connected in the network. That can be a problem for applications such as wireless sensor networks, where sensors might be distributed randomly and all might not be within communication range of each other.

During the larval and pupal stages of a fly's development, the nervous system also uses a probabilistic method to select the cells that will become SOPs. In the fly, however, the cells have no information about how they are connected to each other. As various cells self-select themselves as SOPs, they send out chemical signals to neighboring cells that inhibit those cells from also becoming SOPs. This process continues for three hours, until all of the cells are either SOPs or are neighbors to an SOP, and the fly emerges from the pupal stage.

In the fly, Bar-Joseph noted, the probability that any cell will self-select increases not as a function of connections, as in the typical MIS algorithm for computer networks, but as a function of time. The method does not require advance knowledge of how the cells are arranged. The communication between cells is as simple as can be.
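A minimal simulation of the fly-inspired scheme makes the contrast concrete: each undecided node fires with a probability that grows with time, a node that fires alone in its neighborhood becomes a leader, and its neighbors drop out. The firing schedule and round cap below are illustrative choices, not the exact algorithm from the Science paper:

```python
import random

def fly_mis(adj, rounds=1000, seed=0):
    """Fly-inspired maximal independent set selection.

    Every still-undecided node fires with a probability that grows with
    time (not with its number of connections, as in classic MIS
    algorithms). A node that fires while none of its neighbors fire
    becomes a leader and joins the MIS; its neighbors are inhibited and
    drop out. Sketch of the idea only.
    """
    rng = random.Random(seed)
    undecided = set(adj)
    mis = set()
    for r in range(1, rounds + 1):
        p = min(0.9, r / 10.0)        # firing probability grows with time
        fired = {v for v in undecided if rng.random() < p}
        for v in fired:
            if not any(u in fired for u in adj[v]):
                mis.add(v)            # v fired alone in its neighborhood
                undecided.discard(v)
                for u in adj[v]:      # neighbors are inhibited
                    undecided.discard(u)
        if not undecided:
            break
    return mis
```

On termination every node is either a leader or adjacent to one, and no two leaders are adjacent, which is exactly the MIS property.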

The researchers created a computer algorithm based on the fly's approach and proved that it provides a fast solution to the MIS problem. "The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn't require so many assumptions," Bar-Joseph said. "This makes the solution applicable to many more applications."

This research was supported in part by grants from the National Institutes of Health and the National Science Foundation.


Source

Wednesday, January 12, 2011

Played by Humans, Scored by Nature, Online Game Helps Unravel Secrets of RNA

The game, called EteRNA, harnesses game play to uncover principles for designing molecules of RNA, which biologists believe may be the key regulator of everything that happens in living cells. But the game doesn't end with the highest computer score. Rather, players are scored and ranked based on how well their virtual designs can be rendered as real, physical molecules. Each week's top designs are synthesized in a biochemistry laboratory so researchers can see if the resulting molecules fold themselves into the three-dimensional shapes predicted by computer models.

"Putting a ball through a hoop or drawing a better poker hand is the way we're used to winning games, but in EteRNA you score when the molecule you've designed can assemble itself," said Adrien Treuille, an assistant professor of computer science at Carnegie Mellon, who leads the EteRNA project with Rhiju Das, an assistant professor of biochemistry at Stanford."Nature provides the final score -- and nature is one tough umpire."

Because EteRNA is crowdsourcing the scientific method -- enlisting non-experts to uncover still-mysterious RNA design principles -- it is essential that scoring be rigorous.

"Nature confounds even our best computer models," said Jeehyung Lee, a computer science Ph.D. student at Carnegie Mellon who led the game's development."We knew that if we were to truly tap the wisdom of crowds, our game would have to expose players to every aspect of the scientific process: design, yes, but also experimentation, analysis of results and incorporation of those results into future designs."

The complex, three-dimensional shape of an RNA molecule is critical to its function. The goal of the EteRNA project is to design RNA knots, polyhedra and other shapes never seen before.

"We want to understand how RNA folds in a test tube and eventually in viruses and living cells," Das said."We also want to create a toolkit of basic building blocks that could be used to construct sensors, therapeutic agents and tiny machines."

By synthesizing a design generated by game play, researchers will learn quickly whether the resulting molecule folds into the predicted shape, or something close to it, or if it even folds at all. Even designs that are not synthesized will be scored by nature, in that their scores will be based on the performance of similar designs previously synthesized.

"These experiments are the first-line strategy for validating a design and a crucial part of the scientific method," said Das, whose lab at Stanford synthesizes the molecules."This makes EteRNA similar to what goes on in my lab on a daily basis: You make a prediction, do an experiment, make adjustments and start again." Initially, Das' lab is synthesizing eight designs each week, but is ramping up to synthesize about 100 a week.

RNA, or ribonucleic acid, long has been recognized as a messenger for genetic information, yet its role usually was overshadowed by DNA, which encodes genes, and by proteins, which do the work of the cell. But biologists now suspect RNA plays a much broader role as the regulator of cells, acting much like the operating system of a computer. Understanding RNA design could prove useful for treating or controlling such diseases as HIV, for creating RNA-based sensors and even for building computers out of RNA.

The game employs state-of-the-art simulation software that players use to generate designs. It includes training exercises and challenge puzzles for honing skills, as well as challenges for designing molecules that will be synthesized.

In its use of game play to generate results of scientific interest, EteRNA is similar to other online games such as Foldit, an online protein-folding game that Treuille helped create while at the University of Washington. In fact, Treuille and Das met when they sat at adjacent desks in the Washington biochemistry lab of David Baker, where Treuille was working on Foldit and Das was studying RNA and protein folding and occasionally offering advice.

Both men recognized that the lack of real-world feedback was a limitation of these games. They realized an RNA design game could solve this problem because RNA, unlike many biological molecules, can be readily synthesized in a matter of hours.

RNA consists of long strands of four bases -- adenine, guanine, cytosine and uracil -- with the molecule's shape determined by the sequence of the bases. The rules controlling shape are relatively simple, but the sheer size of the molecules greatly complicates the design process.
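The "relatively simple" pairing rules amount to a short lookup: adenine pairs with uracil, guanine with cytosine, and in RNA guanine can also form a "wobble" pair with uracil. The illustrative helper below checks a sequence against a secondary structure written in standard dot-bracket notation; it is not EteRNA's actual scoring code:

```python
# Allowed RNA base pairs: Watson-Crick (A-U, G-C) plus the common
# G-U "wobble" pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}

def pairs_ok(sequence, structure):
    """Check that every base pair predicted by a dot-bracket structure
    ('(' opens a pair, ')' closes it, '.' is unpaired) is chemically
    allowed for the given sequence. (Illustrative helper only.)"""
    stack = []
    for base, sym in zip(sequence, structure):
        if sym == "(":
            stack.append(base)
        elif sym == ")":
            if not stack or (stack.pop(), base) not in PAIRS:
                return False
    return not stack             # every '(' must have a matching ')'
```

For example, the hairpin sequence "GGGAAACCC" with structure "(((...)))" pairs each G with a C and passes, while pairing G against G fails.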

"We've already found it's better not to use regularly repeating sequences of bases because they prove unstable," Treuille said, based on play by beta testers."We're trying to build things that work in nature, and nature favors solutions that are robust."

The game is integrated with Facebook, so players can post accomplishments to their Facebook wall automatically and can create groups that talk about play and compete with each other.

The first challenges are relatively simple, arbitrary shapes, Das said, but will soon begin to incorporate designs of scientific relevance, such as RNA switches that could be used to sense and respond to other molecules in living cells.

Ultimately, players may end up creating designs and making discoveries of their own. "They're already beginning to act like a scientific community," Treuille said. "One player solved a puzzle that a widely used algorithm could not. Another player has written a strategy guide that proposes an algorithm for solving design problems that is different from, and simpler than, anything in the scientific literature."

The EteRNA project is funded by a grant from the National Science Foundation.



Source