Wednesday, May 18, 2011

Which Technologies Get Better Faster?

In a nutshell, the researchers found that the greater a technology's complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.

"It gives you a way to think about how the structure of the technology affects the rate of improvement," says Jessika Trancik, assistant professor of engineering systems at MIT. Trancik wrote the paper with James McNerney, a graduate student at Boston University (BU); Santa Fe Institute Professor Doyne Farmer; and BU physics professor Sid Redner. It appears online this week in theProceedings of the National Academy of Sciences.

The team was inspired by the complexity of energy-related technologies ranging from tiny transistors to huge coal-fired power plants. They have tracked how these technologies improve over time, through reduced cost or better performance, and in this paper they develop a model that compares that progress to the complexity of the design and the degree of connectivity among its components.

The authors say the approach they devised for comparing technologies could, for example, help policymakers mitigate climate change: By predicting which low-carbon technologies are likeliest to improve rapidly, their strategy could help identify the most effective areas to concentrate research funding. The analysis makes it possible to pick technologies "not just so they will work well today, but ones that will be subject to rapid development in the future," Trancik says.

Besides the importance of overall design complexity in slowing the rate of improvement, the researchers also found that certain patterns of interconnection can create bottlenecks, causing the pace of improvements to come in fits and starts rather than at a steady rate.
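The mechanism can be illustrated with a toy simulation (a sketch in the spirit of the model the researchers describe, not their exact formulation): every component carries a cost, and a proposed redesign of a component is accepted only if it lowers the combined cost of that component and the components connected to it. As connectivity grows, such joint improvements become rarer, so total cost falls more slowly.

```python
import random

def improvement_run(n_components=20, cluster_size=3, steps=20000, seed=1):
    """Toy model: total cost falls more slowly as design connectivity grows.

    Each component is tied to `cluster_size - 1` other components. A proposed
    redesign draws new costs for the whole cluster and is accepted only if the
    cluster's combined cost decreases.
    """
    rng = random.Random(seed)
    cost = [1.0] * n_components
    cluster = [
        [i] + rng.sample([j for j in range(n_components) if j != i], cluster_size - 1)
        for i in range(n_components)
    ]
    for _ in range(steps):
        i = rng.randrange(n_components)
        proposal = {j: rng.random() for j in cluster[i]}
        if sum(proposal.values()) < sum(cost[j] for j in cluster[i]):
            for j, new_cost in proposal.items():
                cost[j] = new_cost
    return sum(cost)

for size in (1, 3, 6):
    print(f"cluster size {size}: residual total cost = {improvement_run(cluster_size=size):.3f}")
```

Running the sketch with larger clusters leaves a markedly higher residual cost after the same number of redesign attempts, mirroring the finding that interconnection slows improvement.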

"In this paper, we develop a theory that shows why we see the rates of improvement that we see," Trancik says. Now that they have developed the theory, she and her colleagues are moving on to do empirical analysis of many different technologies to gauge how effective the model is in practice."We're doing a lot of work on analyzing large data sets" on different products and processes, she says.

For now, she suggests, the method is most useful for comparing two different technologies "whose components are similar, but whose design complexity is different." For example, the analysis could be used to compare different approaches to next-generation solar photovoltaic cells, she says. The method can also be applied to processes, such as improving the design of supply chains or infrastructure systems. "It can be applied at many different scales," she says.

Koen Frenken, professor of economics of innovation and technological change at Eindhoven University of Technology in the Netherlands, says this paper "provides a long-awaited theory" for the well-known phenomenon of learning curves. "It has remained a puzzle why the rates at which humans learn differ so markedly among technologies. This paper provides an explanation by looking at the complexity of technology, using a clever way to model design complexity."

Frenken adds,"The paper opens up new avenues for research. For example, one can verify their theory experimentally by having human subjects solve problems with different degrees of complexity." In addition, he says,"The implications for firms and policymakers {are} that R&D should not only be spent on invention of new technologies, but also on simplifying existing technologies so that humans will learn faster how to improve these technologies."

Ultimately, the kind of analysis developed in this paper could become part of the design process -- allowing engineers to "design for rapid innovation," Trancik says, by using these principles to determine "how you set up the architecture of your system."


Source

Tuesday, May 17, 2011

Physicist Accelerates Simulations of Thin Film Growth

Jacques Amar, Ph.D., professor of physics at the University of Toledo (UT), studies the modeling and growth of materials at the atomic level. He uses Ohio Supercomputer Center (OSC) resources and Kinetic Monte Carlo (KMC) methods to simulate the molecular beam epitaxy (MBE) process, where metals are heated until they transition into a gaseous state and then reform as thin films by condensing on a wafer in single-crystal thick layers.

"One of the main advantages of MBE is the ability to control the deposition of thin films and atomic structures on the atomic scale in order to create nanostructures," explained Amar.

Thin films are used in industry to create a variety of products, such as semiconductors, optical coatings, pharmaceuticals and solar cells.

"Ohio's status as a worldwide manufacturing leader has led OSC to focus on the field of advanced materials as one of our areas of primary support," noted Ashok Krishnamurthy, co-interim co-executive director of the center."As a result, numerous respected physicists, chemists and engineers, such as Dr. Amar, have accessed OSC computation and storage resources to advance their vital materials science research."

Recently, Amar leveraged the center's powerful supercomputers to implement a "first-passage time approach" to speed up KMC simulations of the creation of materials just a few atoms thick.

"The KMC method has been successfully used to carry out simulations of a wide variety of dynamical processes over experimentally relevant time and length scales," Amar noted."However, in some cases, much of the simulation time can be 'wasted' on rapid, repetitive, low-barrier events."

While a variety of approaches to dealing with the inefficiencies have been suggested, Amar settled on using a first-passage-time (FPT) approach to improve KMC processing speeds. FPT, sometimes also called first-hitting-time, is a statistical model that sets a threshold for a process and then estimates quantities such as the probability that the process reaches the threshold within a certain amount of time, or the mean time until the threshold is first reached.

"In this approach, one avoids simulating the numerous diffusive hops of atoms, and instead replaces them with the first-passage time to make a transition from one location to another," Amar said.

In particular, Amar and colleagues from the UT Department of Physics and Astronomy targeted two atomic-level events for testing the FPT approach: edge-diffusion and corner rounding. Edge-diffusion involves the "hopping" movement of surface atoms -- called adatoms -- along the edges of islands, which are formed as the material is growing. Corner rounding involves the hopping of adatoms around island corners, leading to smoother islands.

Amar compared the KMC-FPT and regular KMC simulation approaches using several different models of thin film growth: Cu/Cu(100), fcc(100) and solid-on-solid (SOS). Additionally, he employed two different methods for calculating the FPT for these events: the mean FPT (MFPT), as well as the full FPT distribution.

"Both methods provided"very good agreement" between the FPT-KMC approach and regular KMC simulations," Amar concluded."In addition, we find that our FPT approach can lead to a significant speed-up, compared to regular KMC simulations."

Amar's FPT-KMC approach accelerated simulations by a factor of approximately 63 to 100 over the corresponding KMC simulations for the fcc(100) model, and by a factor of 36 to 76 for the SOS model. For the Cu/Cu(100) tests, speed-up factors of 31 to 42 and 22 to 28 were achieved, respectively, for simulations using the full FPT distribution and the MFPT calculations.

Amar's research was supported through multiple grants from the National Science Foundation, as well as by a grant of computer time from OSC.


Source

Monday, May 16, 2011

Beyond Smart Phones: Sensor Network to Make 'Smart Cities' Envisioned

Computer scientists, electrical and computer engineers, and mathematicians at the TU Darmstadt and the University of Kassel have joined forces and are working on implementing that vision under their "Cocoon" project. The backbone of a "smart" city is a communications network consisting of sensors that receive streams of data, or signals, analyze them, and transmit them onward. Such sensors thus act as both receivers and transmitters, i.e., they are transceivers. The networked communication operates wirelessly via radio links and yields added value to all participants by analyzing the incoming data. For example, the "Smart Home" control system already on the market allows networking all sorts of devices and automatically regulating them to suit demand, thereby allegedly yielding energy savings of as much as fifteen percent.

"Smart Home" might soon be followed by"Smart Hospital,""Smart Indus­try," or"Smart Farm," and even"smart" systems tailored to suit mobile net­works are feasible. Traffic jams may be avoided by, for example, car-to-car or car-to-environment (car-to-X) communications. Health-service sys­tems might also benefit from mobile, sensor communications whenever patients need to be kept supplied with information tailored to suit their health­care needs while underway. Furthermore, sensors on their bodies could assess the status of their health and automatically transmit calls for emergency medical assistance, whenever necessary.

"Smart" and mobile, thanks to beam forming

The researchers regard the ceaseless travels of sensors on mobile systems and their frequent entries into and exits from instrumented areas as the major hurdle to be overcome in implementing their vision of "smart" cities. Sensor-aided devices will have to deal with that by responding to subtle changes in their environments and flexibly and efficiently regulating the quality of received and transmitted signals. Beam forming, a field in which the TU Darmstadt's Institute for Communications Technology is active, should help out there. On that subject, Prof. Rolf Jakoby of the TU Darmstadt's Electrical Engineering and Information Technology Dept. remarked that, "Current types of antennae radiate omnidirectionally, like light bulbs. We intend to create conditions under which antennae will, in the future, behave like spotlights that, once they have located a sought device, will track it, while suppressing interference from stray electromagnetic radiation from other devices that might also be present in the area."
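The "spotlight" behaviour Jakoby describes is the classical idea of a phased array: applying the right phase shift to each antenna element makes the signals add coherently in one chosen direction and partially cancel elsewhere. A minimal numerical sketch for a uniform linear array (illustrative only, not the Darmstadt hardware):

```python
import numpy as np

wavelength = 0.125           # metres, roughly a 2.4 GHz signal
spacing = wavelength / 2     # half-wavelength element spacing
n_elements = 8
steer_deg = 30.0             # direction the "spotlight" should point at

# Phase weights that align the elements toward the steering angle.
k = 2 * np.pi / wavelength
element_pos = np.arange(n_elements) * spacing
weights = np.exp(-1j * k * element_pos * np.sin(np.radians(steer_deg)))

# Resulting array pattern over all directions.
angles_deg = np.linspace(-90, 90, 361)
steering = np.exp(1j * k * np.outer(element_pos, np.sin(np.radians(angles_deg))))
pattern = np.abs(weights @ steering) / n_elements
print("gain toward +30 degrees:", pattern[np.argmin(np.abs(angles_deg - 30))])
print("gain toward -60 degrees:", pattern[np.argmin(np.abs(angles_deg + 60))])
```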

Such antennae, along with transceivers equipped with them, are thus reconfigurable, i.e., adjustable to suit ambient conditions by means of onboard electronic circuitry or remote controls. Working in collaboration with an industrial partner, Jakoby has already equipped terrestrial digital-television (TDTV) transmitters with reconfigurable amplifiers that allow amplifying transmitted-signal levels by as much as ten percent. He added that, "If all of Germany's TDTV transmitters were equipped with such amplifiers, we could shut down one nuclear power plant."

Frequency bands are a scarce resource

Reconfigurable devices also make much more efficient use of a scarce resource: frequency bands. Users have thus far been allocated rigidly defined frequency bands, and even the more popular ones are used at only fifteen to twenty percent of their capacity. Beam forming might allow making more efficient use of them. Jakoby noted that, "This is an area that we are still taking a close look at, but we are well along the way toward understanding the system better." However, only a few uses of beam forming have emerged to date, since currently available systems are too expensive for mass applications.

Small, model networks are targeted

Yet another fundamental problem remains to be solved before "smart" cities may become realities. Sensor communications requires the cooperation of all devices involved, across all communications protocols, such as Bluetooth, and across all networks, such as the European Global System for Mobile Communications (GSM) mobile-telephone network or wireless local-area networks (WLAN), which cannot be achieved with current devices, communications protocols, and networks. Jakoby explained that, "Converting all devices to a common communications protocol is infeasible, which is why we are seeking a new protocol that would be superimposed upon everything and allow them to communicate via several protocols." Transmission channels would also have to be capable of handling a massive flood of data since, as Prof. Abdelhak Zoubir of the TU Darmstadt's Electrical Engineering and Information Technology Dept., the "Cocoon" project's coordinator, put it, "A 'smart' Darmstadt alone would surely involve a million sensors communicating with one another via satellites, mobile telephones, computers, and all of the other types of devices that we already have available." Furthermore, since a single mobile sensor is readily capable of generating several hundred megabytes of data annually, new models for handling the communications of millions of such sensors, models that compress data more densely in order to provide for error-free communications, will be needed. Several hurdles will thus have to be overcome before "smart" cities become reality. Nevertheless, the scientists working on the "Cocoon" project are convinced that they will be able to simulate a "smart" city incorporating various types of devices using early versions of small, model networks.

Over the next three years, scientists at the TU Darmstadt will be receiving a total of 4.5 million euros from the State of Hesse's Offensive for Developing Scientific-Economic Excellence for their research in conjunction with their "Cocoon -- Cooperative Sensor Communications" project.


Source

Friday, May 13, 2011

New Algorithm Offers Ability to Influence Systems Such as Living Cells or Social Networks

However, an MIT researcher has come up with a new computational model that can analyze any type of complex network -- biological, social or electronic -- and reveal the critical points that can be used to control the entire system.

Potential applications of this work, which appears as the cover story in the May 12 issue of Nature, include reprogramming adult cells and identifying new drug targets, says study author Jean-Jacques Slotine, an MIT professor of mechanical engineering and brain and cognitive sciences.

Slotine and his co-authors applied their model to dozens of real-life networks, including cell-phone networks, social networks, the networks that control gene expression in cells and the neuronal network of the C. elegans worm. For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system.

For sparse networks such as gene regulatory networks, they found the number is high, around 80 percent. For dense networks -- such as neuronal networks -- it's more like 10 percent.

The paper, a collaboration with Albert-Laszlo Barabasi and Yang-Yu Liu of Northeastern University, builds on more than half a century of research in the field of control theory.

Control theory -- the study of how to govern the behavior of dynamic systems -- has guided the development of airplanes, robots, cars and electronics. The principles of control theory allow engineers to design feedback loops that monitor input and output of a system and adjust accordingly. One example is the cruise control system in a car.
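The cruise-control example fits in a few lines: a proportional-integral controller measures the gap between the set speed and the measured speed and adjusts the throttle accordingly (a textbook sketch, not any particular vehicle's controller).

```python
def cruise_control(target=25.0, dt=0.1, steps=600, kp=0.8, ki=0.15):
    """Proportional-integral feedback loop driving speed (m/s) toward a set point."""
    speed, integral = 0.0, 0.0
    drag, mass = 0.4, 1000.0
    for _ in range(steps):
        error = target - speed                          # compare output to set point
        integral += error * dt
        throttle_force = (kp * error + ki * integral) * mass
        accel = (throttle_force - drag * speed * speed) / mass
        speed += accel * dt                             # the system responds; repeat
    return speed

print(f"speed after 60 s: {cruise_control():.2f} m/s")
```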

However, while commonly used in engineering, control theory has been applied only intermittently to complex, self-assembling networks such as living cells or the Internet, Slotine says. Control research on large networks has been concerned mostly with questions of synchronization, he says.

In the past 10 years, researchers have learned a great deal about the organization of such networks, in particular their topology -- the patterns of connections between different points, or nodes, in the network. Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.

"The area of control of networks is a very important one, and although much work has been done in this area, there are a number of open problems of outstanding practical significance," says Adilson Motter, associate professor of physics at Northwestern University. The biggest contribution of the paper by Slotine and his colleagues is to identify the type of nodes that need to be targeted in order to control complex networks, says Motter, who was not involved with this research.

The researchers started by devising a new computer algorithm to determine how many nodes in a particular network need to be controlled in order to gain control of the entire network. (Examples of nodes include members of a social network, or single neurons in the brain.)

"The obvious answer is to put input to all of the nodes of the network, and you can, but that's a silly answer," Slotine says."The question is how to find a much smaller set of nodes that allows you to do that."

There are other algorithms that can answer this question, but most of them take far too long -- years, even. The new algorithm quickly tells you both how many points need to be controlled and where those points -- known as "driver nodes" -- are located.

Next, the researchers figured out what determines the number of driver nodes, which is unique to each network. They found that the number depends on a property called "degree distribution," which describes the number of connections per node.

A higher average degree (meaning the points are densely connected) means fewer nodes are needed to control the entire network. Sparse networks, which have fewer connections, are more difficult to control, as are networks where the node degrees are highly variable.
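One standard way to find such a minimum set of driver nodes, and the approach this line of work on network controllability builds on, is a maximum matching over the directed links: any node left without a matched incoming link must be driven directly. A small self-contained sketch (a simple augmenting-path matching, not the authors' implementation):

```python
def driver_nodes(nodes, edges):
    """Driver nodes of a directed network via maximum matching.

    nodes: iterable of node labels; edges: list of (u, v) directed links.
    A node is a driver node if no matched link points into it.
    """
    succ = {u: [] for u in nodes}
    for u, v in edges:
        succ[u].append(v)
    match_in = {v: None for v in nodes}   # matched predecessor of each node

    def augment(u, visited):
        # Try to match one of u's outgoing links, re-routing earlier matches if needed.
        for v in succ[u]:
            if v in visited:
                continue
            visited.add(v)
            if match_in[v] is None or augment(match_in[v], visited):
                match_in[v] = u
                return True
        return False

    for u in nodes:
        augment(u, set())
    return [v for v in nodes if match_in[v] is None]

# A chain is controllable from one end; a hub-and-spoke star needs most nodes driven.
print(driver_nodes("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))  # -> ['a']
print(driver_nodes("abcd", [("a", "b"), ("a", "c"), ("a", "d")]))  # -> ['a', 'c', 'd']
```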

In future work, Slotine and his collaborators plan to delve further into biological networks, such as those governing metabolism. Figuring out how bacterial metabolic networks are controlled could help biologists identify new targets for antibiotics by determining which points in the network are the most vulnerable.


Source

Wednesday, May 11, 2011

Razing Seattle's Viaduct Doesn’t Guarantee Nightmare Commutes, Model Says

University of Washington statisticians have, for the first time, explored a different kind of uncertainty: how much commuters might actually benefit from the project. They found that relying on surface streets would likely have less impact on travel times than previously reported, and that the different options' effects on commute times are not well known.

The research, conducted in 2009, was originally intended as an academic exercise looking at how to assess uncertainties in travel-time projections from urban transportation and land-use models. But the paper is being published amid renewed debate about the future of Seattle's waterfront thoroughfare.

"In early 2009 it was decided there would be a tunnel, and we said, 'Well, the issue is settled but it's still of academic interest,'" said co-author Adrian Raftery, a UW statistics professor."Now it has all bubbled up again."

The study was cited last month in a report by the Seattle Department of Transportation reviewing the tunnel's impact. It is now available online, and will be published in an upcoming issue of the journal Transportation Research: Part A.

The UW authors considered 22 commuter routes, eight of which currently include the viaduct. They compared a business-as-usual scenario, where a new elevated highway or a tunnel carries all existing traffic, against a worst-case scenario in which the viaduct is removed and no measures are taken to increase public transportation or otherwise mitigate the effects.

The study found that simply erasing the structure in 2010 would increase travel times a decade later on the eight routes that currently include the viaduct by 1.5 to 9.2 minutes, with an average increase of 6 minutes. The uncertainty was fairly large: zero change fell within the 95 percent confidence range for all the viaduct routes, and an increase of more than 20 minutes was a reasonable projection in a few cases. In the short term some routes along Interstate 5 were slightly slower, but by 2020 the travel times returned to today's levels.

"This indicates that over time removing the structure would increase commute times for people who use the viaduct by about six minutes, although there's quite a bit of uncertainty about exactly how much," Raftery said."In the rest of the region, on I-5, there's no indication that it would increase commute times at all."

The Washington State Department of Transportation had used a computer model in 2008 to explore travel times under various project scenarios. It found that the peak morning commute across downtown would be 10 minutes longer if the state relied on surface transportation. Shortly thereafter state and city leaders decided to build a tunnel.

The UW team in late 2009 ran the same travel model but added an urban land-use component that allows people and businesses to adapt over time -- for instance by moving, switching jobs or relocating businesses. It also included a statistical method that puts error bars around the travel-time projections.

"There is a big interest among transportation planners in putting an uncertainty range around modeling results," said co-author Hana Sevcikova, a UW research scientist who ran the model.

"Often in policy discussions there's interest in either one end or the other of an interval: How bad could things be if we don't make an investment, or if we do make an investment, are we sure that it's necessary?" Raftery said."The ends of the interval can give you a sense of that."

The UW study used a method called Bayesian statistics to combine computer models with actual data. Researchers used 2000 and 2005 land-use data and 2005 commute travel times to fine-tune the model. Bayesian statistics improves the model's accuracy and provides an uncertainty range around the model's projections.
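In miniature, the calibration step works like this (a crude stand-in for the Bayesian machinery, with invented numbers): the mismatch between modeled and observed 2005 travel times yields a bias and a scatter, and those feed an uncertainty band around a 2020 projection.

```python
import statistics

# Hypothetical route data: model-predicted vs. observed 2005 travel times (minutes).
predicted_2005 = [18.0, 22.0, 31.0, 14.0, 27.0, 35.0]
observed_2005 = [20.5, 21.0, 34.0, 15.5, 29.0, 33.0]

errors = [obs - pred for obs, pred in zip(observed_2005, predicted_2005)]
bias = statistics.mean(errors)      # systematic model error
spread = statistics.stdev(errors)   # scatter around that bias

# Calibrated projection for a 2020 route: point estimate plus a rough 95% band.
model_2020 = 26.0
low, high = model_2020 + bias - 2 * spread, model_2020 + bias + 2 * spread
print(f"calibrated 2020 estimate: {model_2020 + bias:.1f} min ({low:.1f} to {high:.1f})")
```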

The study used UrbanSim, an urban simulation model developed by co-author and former UW faculty member Paul Waddell, now a professor at the University of California, Berkeley. The model starts running in the year 2000, the viaduct is taken down in 2010 and the study focuses on peak morning commutes in the year 2020.

Despite renewed discussion, the authors are not taking a position on the debate.

"This is a scientific assessment. People could well say that six minutes is a lot, and it's worth whatever it takes {to avoid it}," Raftery said."To some extent it comes down to a value judgment, factoring in the economic and environmental impacts."


Source

Sunday, May 8, 2011

Evolutionary Lessons for Wind Farm Efficiency

Senior Lecturer Dr Frank Neumann, from the School of Computer Science, is using a "selection of the fittest" step-by-step approach called "evolutionary algorithms" to optimise wind turbine placement. This takes into account wake effects, the minimum amount of land needed, wind factors and the complex aerodynamics of wind turbines.

"Renewable energy is playing an increasing role in the supply of energy worldwide and will help mitigate climate change," says Dr Neumann."To further increase the productivity of wind farms, we need to exploit methods that help to optimise their performance."

Dr Neumann says the question of exactly where wind turbines should be placed to gain maximum efficiency is highly complex. "An evolutionary algorithm is a mathematical process where potential solutions keep being improved a step at a time until the optimum is reached," he says.

"You can think of it like parents producing a number of offspring, each with differing characteristics," he says."As with evolution, each population or 'set of solutions' from a new generation should get better. These solutions can be evaluated in parallel to speed up the computation."

Other biology-inspired algorithms to solve complex problems are based on ant colonies.

"Ant colony optimisation" uses the principle of ants finding the shortest way to a source of food from their nest.

"You can observe them in nature, they do it very efficiently communicating between each other using pheromone trails," says Dr Neumann."After a certain amount of time, they will have found the best route to the food -- problem solved. We can also solve human problems using the same principles through computer algorithms."

Dr Neumann has come to the University of Adelaide this year from Germany where he worked at the Max Planck Institute. He is working on wind turbine placement optimisation in collaboration with researchers at the Massachusetts Institute of Technology.

"Current approaches to solving this placement optimisation can only deal with a small number of turbines," Dr Neumann says."We have demonstrated an accurate and efficient algorithm for as many as 1000 turbines."

The researchers are now looking to fine-tune the algorithms even further using different models of wake effect and complex aerodynamic factors.


Source

Saturday, May 7, 2011

EEG Headset With Flying Harness Lets Users 'Fly' by Controlling Their Thoughts

Creative director and Rensselaer MFA candidate Yehuda Duenyas describes the "Infinity Simulator" as a platform similar to a gaming console -- like the Wii or the Kinect -- writ large.

"Instead of you sitting and controlling gaming content, it's a whole system that can control live elements -- so you can control 3-D rigging, sound, lights, and video," said Duenyas, who works under the moniker"xxxy.""It's a system for creating hybrids of theater, installation, game, and ride."

Duenyas created the"Infinity Simulator" with a team of collaborators, including Michael Todd, a Rensselaer 2010 graduate in computer science. Duenyas will exhibit the new system in the art installation"The Ascent" on May 12 at Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd.

Within the theater, the rigging -- including the harness -- is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The "Infinity Simulator," a series of three C programs written by Todd, acts as an intermediary between the headset and the theater systems, connecting and conveying all input and output.

"We've built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it," said Duenyas."The 'Infinity Simulator' is the center; everything talks to the 'Infinity Simulator.'"

The May 12"The Ascent" installation is only one experience made possible by the new platform, Duenyas said.

"'The Ascent' embodies the maiden experience that we'll be presenting," Duenyas said."But we've found that it's a versatile platform to create almost any type of experience that involves rigging, video, sound, and light. The idea is that it's reactive to the users' body; there's a physical interaction."

Duenyas, a Brooklyn-based artist and theater director, specializes in experiential theater performances.

"The thing that I focus on the most is user experience," Duenyas said."All the shows I do with my theater company and on my own involve a lot of set and set design -- you're entering into a whole world. You're having an experience that is more than going to a show, although a show is part of it."

The"Infinity Simulator" stemmed from an idea Duenyas had for such a theatrical experience.

"It started with an idea that I wanted to create a simulator that would give people a feeling of infinity," Duenyas said. His initial vision was that of a room similar to a Cave Automated Virtual Environment -- a room paneled with projection screens -- in which participants would be able to float effortlessly in an environment intended to evoke a glimpse into infinity.

At Rensselaer, Duenyas took advantage of the technology at hand to explore his idea, first with a video game he developed in 2010, then -- working through the Department of the Arts -- with EMPAC's computer-controlled 3-D theatrical flying harness.

"The charge of the arts department is to allow the artists that they bring into the department to use technology to enhance what they've been doing already," Duenyas said."In coming here (EMPAC), and starting to translate our ideas into a physical space, so many different things started opening themselves up to us."

The 2010 video game, also developed with Todd, tracked the movements -- pitch and yaw -- of players suspended in a custom-rigged harness, allowing players to soar through simulated landscapes. Duenyas said that that game (also called the "Infinity Simulator") and the new platform are part of the same vision.

EMPAC Director Johannes Goebel saw the game on display at the 2010 GameFest and discussed the custom-designed 3-D theatrical flying rig in EMPAC with Duenyas. Working through the Arts Department, Duenyas submitted a proposal to work with the rig, and his proposal was accepted.

Duenyas and his team experimented -- first gaining peripheral control over the system, and then linking it to the EEG headset -- and created the Ascent installation as an initial project. In the installation, the Infinity Simulator is programmed to respond to relaxation.

"We're measuring two brain states -- alpha and theta -- waking consciousness and everyday brain computational processing," said Duenyas."If you close your eyes and take a deep breath, that processing power decreases. When it decreases below a certain threshold, that is the trigger for you to elevate."

As a user rises, their ascent triggers a changing display of lights, sound, and video. Duenyas said he wants to hint at transcendental experience, while keeping the door open for a more circumspect interpretation.

"The point is that the user is trying to transcend the everyday and get into this meditative state so they can have this experience. I see it as some sort of iconic spiritual simulator. That's the serious side," he said."There's also a real tongue-in-cheek side of my work: I want clouds, I want Terry Gilliam's animated fist to pop out of a cloud and hit you in the face. It's mixing serious religious symbology, but not taking it seriously."

The humor is prompted, in part, by the limitations of this earliest iteration of Duenyas' vision.

"It started with, 'I want to have a glimpse of infinity,' 'I want to float in space.' Then you get in the harness and you're like 'man, this harness is uncomfortable,'" he said."In order to achieve the original vision, we had to build an infrastructure, and I still see development of the infinity experience is a ways off; but what we can do with the infrastructure in a realistic time frame is create 'The Ascent,' which is going to be really fun, and totally other."

Creating the"Infinity Simulator" has prompted new possibilities.

"The vision now is to play with this fun system that we can use to build any experience," he said."It's sort of overwhelming because you could do so many things -- you could create a flight through cumulus clouds, you could create an augmented physicality parkour course where you set up different features in the room and guide yourself to different heights. It's limitless."


Source

Friday, May 6, 2011

Scientists Afflict Computers With 'Schizophrenia' to Better Understand the Human Brain

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance-the salience-of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin."When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information -- not as distinct units, but as statistical relationships of words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann."Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate -- essentially telling it to stop forgetting so much.
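The parameter change itself is mundane; its effect is not. In a generic online-learning loop (a toy analogue, not DISCERN itself), the learning rate controls how strongly each new example overwrites what was learned before, and turning it up makes the system chase noise it should have averaged away -- the computational counterpart of exaggerated salience.

```python
import random

def train_online(learning_rate, n_examples=500, seed=3):
    """Online update of a single weight toward noisy targets centred on 1.0."""
    rng = random.Random(seed)
    weight = 0.0
    for _ in range(n_examples):
        target = 1.0 + rng.gauss(0, 0.5)              # every example carries noise
        weight += learning_rate * (target - weight)   # standard delta-rule update
    return weight

for lr in (0.01, 0.1, 0.9):
    estimates = [train_online(lr, seed=s) for s in range(5)]
    spread = max(estimates) - min(estimates)
    print(f"learning rate {lr:>4}: final estimates vary by {spread:.3f}")
```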

"It's an important mechanism to be able to ignore things," says Grasemann."What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" -- replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann."So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says."The hope is that this kind of modeling will help clinical research."


Source

Thursday, May 5, 2011

Robot Engages Novice Computer Scientists

A product of CMU's famed Robotics Institute, Finch was designed specifically to make introductory computer science classes an engaging experience once again.

A white plastic, two-wheeled robot with bird-like features, Finch can quickly be programmed by a novice to say "Hello, World," or do a little dance, or make its beak glow blue in response to cold temperature or some other stimulus. But the simple look of the tabletop robot is deceptive. Based on four years of educational research sponsored by the National Science Foundation, Finch includes a number of features that could keep students busy for a semester or more thinking up new things to do with it.

"Students are more interested and more motivated when they can work with something interactive and create programs that operate in the real world," said Tom Lauwers, who earned his Ph.D. in robotics at CMU in 2010 and is now an instructor in the Robotics Institute's CREATE Lab."We packed Finch with sensors and mechanisms that engage the eyes, the ears -- as many senses as possible."

Lauwers has launched a startup company, BirdBrain Technologies, to produce Finch and now sells them online at www.finchrobot.com for $99 each.

"Our vision is to make Finch affordable enough that every student can have one to take home for assignments," said Lauwers, who developed the robot with Illah Nourbakhsh, associate professor of robotics and director of the CREATE Lab. Less than a foot long, Finch easily fits in a backpack and is rugged enough to survive being hauled around and occasionally dropped.

Finch includes temperature and light sensors, a three-axis accelerometer and a bump sensor. It has color-programmable LED lights, a beeper and speakers. With a pencil inserted in its tail, Finch can be used to draw pictures. It can be programmed to be a moving, noise-making alarm clock. It even has uses beyond a robot; its accelerometer enables it to be used as a 3-D mouse to control a computer display.

Robot kits suitable for students as young as 12 are commercially available, but often cost more than the Finch, Lauwers said. What's more, the idea is to use the robot to make computer programming lessons more interesting, not to use precious instructional time to first build a robot.

Finch is a plug-and-play device, so no drivers or other software need to be installed beyond what is already used in typical computer science courses. Finch connects with and receives power from the computer over a 15-foot USB cable, eliminating batteries and off-loading its computation to the computer. Support for a wide range of programming languages and environments is coming, including graphical languages appropriate for young students. Finch currently can be programmed with the Java and Python languages widely used by educators.
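A typical first assignment might look something like the following sketch (the `Finch` wrapper and its method names here are hypothetical, shown only to convey the flavour of sensor-driven exercises; the actual API is documented on the BirdBrain Technologies site):

```python
# Hypothetical wrapper and method names, for illustration only.
from finch import Finch   # assumed import path, not the verified package layout

robot = Finch()
try:
    while True:
        if robot.temperature() < 15.0:   # assumed temperature-sensor accessor
            robot.led(0, 0, 255)         # cold: glow blue
        else:
            robot.led(255, 100, 0)       # warm: glow orange
        if robot.obstacle():             # assumed bump-sensor accessor
            robot.wheels(-0.5, -0.5)     # back away
        else:
            robot.wheels(0.3, 0.3)       # wander forward
finally:
    robot.halt()
```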

A number of assignments are available on the Finch Robot website to help teachers drop Finch into their lesson plans, and the website allows instructors to upload their own assignments or ideas in return for company-provided incentives. The robot has been classroom-tested at the Community College of Allegheny County, Pa., and by instructors in high school, university and after-school programs.

"Computer science now touches virtually every scientific discipline and is a critical part of most new technologies, yet U.S. universities saw declining enrollments in computer science through most of the past decade," Nourbakhsh said."If Finch can help motivate students to give computer science a try, we think many more students will realize that this is a field that they would enjoy exploring."


Source

Wednesday, May 4, 2011

Revolutionary New Paper Computer Shows Flexible Future for Smartphones and Tablets

"This is the future. Everything is going to look and feel like this within five years," says creator Roel Vertegaal, the director of Queen's University Human Media Lab."This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen."

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone -- it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin-film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will take the shape of your pocket.

Dr. Vertegaal will unveil his paper computer on May 10 at 2 pm at the Association of Computing Machinery's CHI 2011 (Computer Human Interaction) conference in Vancouver -- the premier international conference of Human-Computer Interaction.

Being able to store and interact with documents on larger versions of these light, flexible computers means offices will no longer require paper or printers.

"The paperless office is here. Everything can be stored digitally and you can place these computers on top of each other just like a stack of paper, or throw them around the desk" says Dr. Vertegaal.

The invention heralds a new generation of computers that are super lightweight, thin-film and flexible. They use no power when nobody is interacting with them. When users are reading, they don't feel like they're holding a sheet of glass or metal.

An article on a study of interactive bending with flexible thin-film computers is to be published at the conference in Vancouver, where the group is also demonstrating a thin-film wristband computer called Snaplet.

The development team included researchers Byron Lahey and Win Burleson of the Motivational Environments Research Group at Arizona State University (ASU), Audrey Girouard and Aneesh Tarun from the Human Media Lab at Queen's University, Jann Kaminski and Nick Colaneri, director of ASU's Flexible Display Center, and Seth Bishop and Michael McCreary, the VP R&D of E Ink Corporation.

For more information, articles, videos, and high resolution photos, visit http://www.humanmedialab.org/paperphone/ and http://www.youtube.com/watch?v=Rl-qygUEE2c


Source

Monday, May 2, 2011

College Students' Use of Kindle DX Points to E-Reader’s Role in Academia

The UW last year was one of seven U.S. universities that participated in a pilot study of the Kindle DX, a larger version of the popular e-reader. UW researchers who study technology looked at how students involved in the pilot project did their academic reading.

"There is no e-reader that supports what we found these students doing," said first author Alex Thayer, a UW doctoral student in Human Centered Design and Engineering."It remains to be seen how to design one. It's a great space to get into, there's a lot of opportunity."

Thayer is presenting the findings in Vancouver, B.C. at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, where the study received an honorable mention for best paper.

"Most e-readers were designed for leisure reading -- think romance novels on the beach," said co-author Charlotte Lee, a UW assistant professor of Human Centered Design and Engineering."We found that reading is just a small part of what students are doing. And when we realize how dynamic and complicated a process this is, it kind of redefines what it means to design an e-reader."

Some of the other schools participating in the pilot project conducted shorter studies, generally looking at the e-reader's potential benefits and drawbacks for course use. The UW study looked more broadly at how students did their academic reading, following both those who incorporated the e-reader into their routines and those who did not.

"We were not trying to evaluate the device, per se, but wanted to think long term, really looking to the future of e-readers, what are students trying to do, how can we support that," Lee said.

The researchers interviewed 39 first-year graduate students in the UW's Department of Computer Science & Engineering, 7 women and 32 men, ranging from 21 to 53 years old.

By spring quarter of 2010, seven months into the study, less than 40 percent of the students were regularly doing their academic reading on the Kindle DX. Reasons included the device's lack of support for taking notes and difficulty in looking up references. (Amazon, which makes the Kindle DX, has since improved some of these features.)

UW researchers continued to interview all the students over the nine-month period to find out more about their reading habits, with or without the e-reader. They found:

  • Students did most of the reading in fixed locations: 47 percent of reading was at home, 25 percent at school, 17 percent on a bus and 11 percent in a coffee shop or office.
  • The Kindle DX was more likely to replace students' paper-based reading than their computer-based reading.
  • Of the students who continued to use the device, some read near a computer so they could look up references or do other tasks that were easier to do on a computer. Others tucked a sheet of paper into the case so they could write notes.
  • With paper, three quarters of students marked up texts as they read. This included highlighting key passages, underlining, drawing pictures and writing notes in margins.
  • A drawback of the Kindle DX was the difficulty of switching between reading techniques, such as skimming an article's illustrations or references just before reading the complete text. Students frequently made such switches as they read course material.
  • The digital text also disrupted a technique called cognitive mapping, in which readers used physical cues such as the location on the page and the position in the book to go back and find a section of text or even to help retain and recall the information they had read.

Lee predicts that over time software will help address some of these issues. She even envisions niche software that could support reading styles specific to certain disciplines.

"You can imagine that a historian going through illuminated texts is going to have very different navigation needs than someone who is comparing algorithms," Lee said.

It's likely that desktop computers, laptops, tablet computers and yes, even paper, will play a role in academic reading's future. But the authors say e-readers will also find their place. Thayer imagines the situation will be similar to today's music industry, where mp3s, CDs and LPs all coexist in music-lovers' listening habits.

"E-readers are not where they need to be in order to support academic reading," Lee concludes. But asked when e-readers will reach that point, she predicts:"It's going to be sooner than we think."

Other co-authors are Linda Hwang, Heidi Sales, Pausali Sen and Ninad Dalal of the UW.


Source

Friday, April 29, 2011

Good Eggs: Nanomagnets Offer Food for Thought About Computer Memories

For a study described in a new paper, NIST researchers used electron-beam lithography to make thousands of nickel-iron magnets, each about 200 nanometers (billionths of a meter) in diameter. Each magnet is ordinarily shaped like an ellipse, a slightly flattened circle. Researchers also made some magnets in three different egglike shapes with an increasingly pointy end. It's all part of NIST research on nanoscale magnetic materials, devices and measurement methods to support development of future magnetic data storage systems.

It turns out that even small distortions in magnet shape can lead to significant changes in magnetic properties. Researchers discovered this by probing the magnets with a laser and analyzing what happens to the "spins" of the electrons, a quantum property that's responsible for magnetic orientation. Changes in the spin orientation can propagate through the magnet like waves at different frequencies. The more egg-like the magnet, the more complex the wave patterns and their related frequencies. (Something similar happens when you toss a pebble in an asymmetrically shaped pond.) The shifts are most pronounced at the ends of the magnets.

To confirm localized magnetic effects and "color" the eggs, scientists made simulations of various magnets using NIST's object-oriented micromagnetic framework (OOMMF). Lighter colors indicate stronger frequency signals.

The egg effects explain erratic behavior observed in large arrays of nanomagnets, which may be imperfectly shaped by the lithography process. Such distortions can affect switching in magnetic devices. The egg study results may be useful in developing random-access memories (RAM) based on interactions between electron spins and magnetized surfaces. Spin-RAM is one approach to making future memories that could provide high-speed access to data while reducing processor power needs by storing data permanently in ever-smaller devices. Shaping magnets like eggs breaks up a symmetric frequency pattern found in ellipse structures and thus offers an opportunity to customize and control the switching process.

"For example, intentional patterning of egg-like distortions into spinRAM memory elements may facilitate more reliable switching," says NIST physicist Tom Silva, an author of the new paper.

"Also, this study has provided the Easter Bunny with an entirely new market for product development."


Source

Friday, April 22, 2011

'Time Machine' Made to Visually Explore Space and Time in Videos: Time-Lapse GigaPans Provide New Way to Access Big Data

Viewers, for instance, can use the system to focus in on the details of a booth within a panorama of a carnival midway, but also reverse time to see how the booth was constructed. Or they can watch a group of plants sprout, grow and flower, shifting perspective to watch some plants move wildly as they grow while others get eaten by caterpillars. Or, they can view a computer simulation of the early universe, watching as gravity works across 600 million light-years to condense matter into filaments and finally into stars that can be seen by zooming in for a close up.

"With GigaPan Time Machine, you can simultaneously explore space and time at extremely high resolutions," said Illah Nourbakhsh, associate professor of robotics and head of the CREATE Lab."Science has always been about narrowing your point of view -- selecting a particular experiment or observation that you think might provide insight. But this system enables what we call exhaustive science, capturing huge amounts of data that can then be explored in amazing ways."

The system is an extension of the GigaPan technology developed by the CREATE Lab and NASA, which can capture a mosaic of hundreds or thousands of digital pictures and stitch those frames into a panorama that can be interactively explored via computer. To extend GigaPan into the time dimension, image mosaics are repeatedly captured at set intervals, and then stitched across both space and time to create a video in which each frame can be hundreds of millions, or even billions, of pixels.

An enabling technology for time-lapse GigaPans is a feature of the HTML5 language that has been incorporated into such browsers as Google's Chrome and Apple's Safari. HTML5, the latest revision of the HyperText Markup Language (HTML) standard that is at the core of the Internet, makes browsers capable of presenting video content without use of plug-ins such as Adobe Flash or Quicktime.

Using HTML5, CREATE Lab computer scientists Randy Sargent, Chris Bartley and Paul Dille developed algorithms and software architecture that make it possible to shift seamlessly from one video portion to another as viewers zoom in and out of Time Machine imagery. To keep bandwidth manageable, the GigaPan site streams only those video fragments that pertain to the segment and/or time frame being viewed.
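Conceptually, the streaming decision reduces to mapping the current viewport and playback time onto a handful of tile files (an illustrative calculation only; the actual Time Machine tiling scheme and file layout are not spelled out here):

```python
def visible_tiles(view_x, view_y, view_w, view_h, zoom, time_sec,
                  tile_px=512, segment_sec=10.0):
    """Return (zoom, column, row, segment) keys for the tiles covering the viewport."""
    scale = 2 ** zoom                     # zoomed-in pixels per source pixel
    first_col = int(view_x * scale) // tile_px
    last_col = int((view_x + view_w) * scale) // tile_px
    first_row = int(view_y * scale) // tile_px
    last_row = int((view_y + view_h) * scale) // tile_px
    segment = int(time_sec // segment_sec)
    return [(zoom, col, row, segment)
            for col in range(first_col, last_col + 1)
            for row in range(first_row, last_row + 1)]

# Only the handful of video fragments under the viewer's window get requested.
print(visible_tiles(view_x=1200, view_y=800, view_w=640, view_h=360,
                    zoom=2, time_sec=37.0))
```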

"We were crashing the browsers early on," Sargent recalled."We're really pushing the browser technology to the limits."

Guidelines on how individuals can capture time-lapse images using GigaPan cameras are included on the site created for hosting the new imagery's large data files, http://timemachine.gigapan.org. Sargent explained that the CREATE Lab is eager to work with people who want to capture Time Machine imagery with GigaPan, or use the visualization technology for other applications.

Once a Time Machine GigaPan has been created, viewers can annotate and save their explorations of it in the form of video "Time Warps."

Though the time-lapse mode is an extension of the original GigaPan concept, scientists already are applying the visualization techniques to other types of Big Data. Carnegie Mellon's Bruce and Astrid McWilliams Center for Cosmology, for instance, has used it to visualize a simulation of the early universe performed at the Pittsburgh Supercomputing Center by Tiziana Di Matteo, associate professor of physics.

"Simulations are a huge bunch of numbers, ugly numbers," Di Matteo said."Visualizing even a portion of a simulation requires a huge amount of computing itself." Visualization of these large data sets is crucial to the science, however."Discoveries often come from just looking at it," she explained.

Rupert Croft, associate professor of physics, said cosmological simulations are so massive that only a segment can be visualized at a time using usual techniques. Yet whatever is happening within that segment is being affected by forces elsewhere in the simulation that cannot be readily accessed. By converting the entire simulation into a time-lapse GigaPan, however, Croft and his Ph.D. student, Yu Feng, were able to create an image that provided both the big picture of what was happening in the early universe and the ability to look in detail at any region of interest.

Using a conventional GigaPan camera, Janet Steven, an assistant professor of biology at Sweet Briar College in Virginia, has created time-lapse imagery of rapid-growing brassicas, known as Wisconsin Fast Plants. "This is such an incredible tool for plant biology," she said. "It gives you the advantage of observing individual plants, groups of plants and parts of plants, all at once."

Steven, who has received GigaPan training through the Fine Outreach for Science program, said time-lapse photography has long been used in biology, but the GigaPan technology makes it possible to observe a number of plants in detail without having separate cameras for each plant. Even as one plant is studied in detail, it's possible to also see what neighboring plants are doing and how that might affect the subject plant, she added.

Steven said creating time-lapse GigaPans of entire landscapes could be a powerful tool for studying seasonal change in plants and ecosystems, an area of increasing interest for understanding climate change. Time-lapse GigaPan imagery of biological experiments also could be an educational tool, allowing students to make independent observations and develop their own hypotheses.

Google Inc. supported development of GigaPan Time Machine.


Source

Thursday, April 21, 2011

CAPTCHAs With Chaos: Strong Protection for Weak Passwords

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have been inspired by the physics of critical phenomena in their attempts to significantly improve password protection. The researchers split a password into two sections. With the first, easy-to-memorize section they encrypt a CAPTCHA ("completely automated public Turing test to tell computers and humans apart") -- an image that computer programs inherently have difficulty deciphering. The researchers also make it more difficult for the computers whose task is to automatically crack passwords to read the passwords without authorization. They use images of a simulated physical system, which they additionally make unrecognizable with a chaotic process. These p-CAPTCHAs enable the Dresden physicists to achieve a high level of password protection, even though the user need only remember a weak password.

Computers sometimes use brute force. Hacking programs use so-called brute-force attacks to try out all possible character combinations to guess passwords. CAPTCHAs are therefore intended as an additional safeguard to ensure that input originates from a human being and not from a machine. They pose a task for the user which is simple enough for any human, yet very difficult for a program. Users must enter a distorted text which is displayed on the screen, for example. CAPTCHAs are increasingly being bypassed, however. Personal data of members of the "SchülerVZ" social network for school pupils have already been stolen in this way.

Researchers at the Max Planck Institute for the Physics of Complex Systems in Dresden have now developed a new type of password protection that is based on a combination of characters and a CAPTCHA. They also use mathematical methods from the physics of critical phenomena to protect the CAPTCHA from being accessed by computers. "We thus make the password protection both more effective and simpler," says Konstantin Kladko, who had the idea for this interdisciplinary approach during his time at the Dresden Max Planck Institute; he is currently a researcher at Axioma Research in Palo Alto, USA.

The Dresden-based researchers first combine password and CAPTCHA in a completely novel way. The CAPTCHA is no longer generated anew each time in order to distinguish the human user from a computer on a case-by-case basis. Rather, the physicists use the codeword in the image, which only humans can decipher, as the real password that provides access to a social network or an online bank account, for example. The researchers additionally encrypt this password using a combination of characters.

However, that's not all: the CAPTCHA is a snapshot of a dynamic, chaotic Hamiltonian system in two dimensions. For the sake of simplicity, this image can be imagined as a grey-scale pixel matrix, where every pixel represents an oscillator. The oscillators are coupled in a network. Every oscillator oscillates between two states and is affected by the neighbouring oscillators as it does so, thus resulting in the grey scales.

Chaotic development makes password unreadable

The physicists then leave the system to develop chaotically for a period of time. The grey-scale matrix changes the colour of its pixels. The result is an image that no longer contains a recognizable word. The researchers subsequently encrypt this image with the combination of characters and save the result."We therefore talk of a password-protected CAPTCHA or p-CAPTCHA," says Sergej Flach, who teamed up with Tetyana Laptyeva to achieve the decisive research results at the Max Planck Institute for the Physics of Complex Systems. Since the chaotic evolution of the initial image is deterministic, i.e. reversible, the whole procedure can be reversed using the combination of characters, so that the user can again read the password hidden in the CAPTCHA.
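The procedure can be illustrated with a small sketch. In the sketch below, Arnold's cat map stands in for the paper's coupled Hamiltonian oscillator lattice (any deterministic, reversible chaotic transformation illustrates the same point), and a SHA-256 keystream derived from the weak password plays the role of the character-combination encryption; this is a conceptual toy, not the researchers' actual scheme.

import hashlib
import numpy as np

def cat_map(img, steps):
    """Scramble an N x N image with Arnold's cat map, standing in for the
    paper's chaotic Hamiltonian evolution (both are deterministic and
    reversible, which is all the scheme needs)."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(steps):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def inverse_cat_map(img, steps):
    """Undo cat_map by applying the inverse pixel permutation."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(steps):
        out = out[(2 * x - y) % n, (y - x) % n]
    return out

def keystream(password, nbytes):
    """Expand a (possibly weak) password into a pseudo-random byte stream."""
    out, counter = bytearray(), 0
    while len(out) < nbytes:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(bytes(out[:nbytes]), dtype=np.uint8)

def xor_encrypt(img, password):
    """XOR the image with the password-derived keystream (its own inverse)."""
    return img ^ keystream(password, img.size).reshape(img.shape)

# Stand-in for a rendered CAPTCHA image containing the long, strong password.
captcha = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)

stored = xor_encrypt(cat_map(captcha, steps=10), "weak")            # what gets saved
recovered = inverse_cat_map(xor_encrypt(stored, "weak"), steps=10)  # what the user unlocks
assert np.array_equal(recovered, captcha)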

"The character combination we use to encrypt the password in the CAPTCHA can be very easy to remember," explains Konstantin Kladko."We thus take account of the fact that most people only want to, or can only, remember simple passwords." The fact that the passwords are correspondingly weak is now no longer important, because the real protection comes from the encrypted password in the CAPTCHA.

On the one hand, the password hidden in the CAPTCHA is too long for computers to be able to guess it using a brute-force attack in a reasonable length of time. On the other, the physicists use a critical system to generate the password image. This system is close to a phase transition: with a phase transition, the system changes from one physical state to another, from the paramagnetic to the ferromagnetic state, for example. Close to the transition, regions repeatedly form which temporarily have already completed the transition."The resulting image is always very grainy. Therefore, a computer cannot distinguish it from the original it is searching for," explains Sergej Flach.

"Although the study has just been submitted to a specialist journal and is only available online in an archive, it has already provoked a large number of responses in the community -- and not only in Hacker News," says Sergej Flach."I was very impressed by the depth of some comments in certain forums -- in Slashdot, for example." The specialists are obviously impressed by the ingenuity of the approach, which means passwords could be very difficult to crack in the future. Moreover, the method is easy and quick to implement in conventional computer systems."An expansion to several p-CAPTCHA levels is obvious," says Sergej Flach. Hoiwever, this requires increased computing power to reverse the chaotic development in a reasonable time:"We therefore want to investigate various Hamiltonian and non-Hamiltonian systems in the future to see whether they provide faster and even more effective protection."


Source

Wednesday, April 20, 2011

Biophysicist Targeting IL-6 to Halt Breast, Prostate Cancer

Chenglong Li, Ph.D., an assistant professor of medicinal chemistry and pharmacognosy at The Ohio State University (OSU), is leveraging a powerful computer cluster at the Ohio Supercomputer Center (OSC) to develop a drug that will block the small protein molecule Interleukin-6 (IL-6). The body normally produces this immune-response messenger to combat infections, burns, traumatic injuries, etc. Scientists have found, however, that in people who have cancer, the body fails to turn off the response and overproduces IL-6.

"There is an inherent connection between inflammation and cancer," explained Li."In the case of breast cancers, a medical review systematically tabulated IL-6 levels in various categories of cancer patients, all showing that IL-6 levels elevated up to 40-fold, especially in later stages, metastatic cases and recurrent cases."

In 2002, Japanese researchers found that a natural, non-toxic molecule created by marine bacteria -- madindoline A (MDL-A) -- could be used to mildly suppress the IL-6 signal. Unfortunately, the researchers also found the molecule wouldn't bind strongly enough to be effective as a cancer drug and would be too difficult and expensive to synthesize commercially. And, most surprisingly, they found the bacteria soon mutated to produce a different, totally ineffectual compound. Around the same time, Stanford scientists were able to construct a static image of the crystal structure of IL-6 and two additional proteins.

Li recognized the potential of these initial insights and partnered last year with an organic chemist and a cancer biologist at OSU's James Cancer Hospital to further investigate, using an OSC supercomputer to construct malleable, three-dimensional color simulations of the protein complex.

"The proximity of two outstanding research organizations -- the James Cancer Hospital and OSC -- provide a potent enticement for top medical investigators, such as Dr. Li, to conduct their vital computational research programs at Ohio State University," said Ashok Krishnamurthy, interim co-executive director of OSC.

"We proposed using computational intelligence to re-engineer a new set of compounds that not only preserve the original properties, but also would be more potent and efficient," Li said."Our initial feasibility study pointed to compounds with a high potential to be developed into a non-toxic, orally available drug."

Li accessed 64 nodes of OSC's Glenn IBM 1350 Opteron cluster to simulate IL-6 and the two additional helper proteins needed to convey the signal: the receptor IL-6R and the common signal-transducing receptor GP130. Two full sets of the three proteins combine to form a six-sided molecular machine, or"hexamer," that transmits the signals that will, in time, cause cellular inflammation and, potentially, cancer.

Li employed the AMBER (Assisted Model Building with Energy Refinement) and AutoDock molecular modeling simulation software packages to help define the interactions between those proteins and the strength of their binding at five"hot spots" found in each half of the IL-6/IL-6R/GP130 hexamer.

By plugging small molecules, like MDL-A, into any of those hot spots, Li could block the hexamer from forming. So, he examined the binding strength of MDL-A at each of the hexamer hot spots, identifying the most promising location, which turned out to be between IL-6 and the first segment, or modular domain (D1), of GP130.

To design a derivative of MDL-A that would dock with D1 at that specific hot spot, Li used the CombiGlide screening program to search through more than 6,000 drug fragments. So far, he has identified two potential solutions by combining the"top" half of the MDL-A molecule with the"bottom" half of a benzyl molecule or a pyrazole molecule. These candidates preserve the important binding features of the MDL-A, while yielding molecules with strong molecular bindings that also are easier to synthesize than the original MDL-A.
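Conceptually, this screening step boils down to scoring candidates at each binding site and keeping the most favourable ones. The snippet below illustrates only that bookkeeping, with made-up placeholder scores; it does not call AMBER, AutoDock or CombiGlide, and the numbers are not results from the study.

# Hypothetical binding scores (kcal/mol, more negative = tighter binding)
# for one candidate molecule at the five hot spots; placeholder values only.
hotspot_scores = {
    "hot spot 1": -5.2,
    "hot spot 2": -4.1,
    "hot spot 3 (IL-6 / GP130 D1)": -7.8,
    "hot spot 4": -3.9,
    "hot spot 5": -4.6,
}

best_site = min(hotspot_scores, key=hotspot_scores.get)
print(f"Most promising hot spot: {best_site} ({hotspot_scores[best_site]} kcal/mol)")

# The same ranking idea applies to fragment screening: score each hybrid
# (e.g. MDL-A top half + candidate bottom half) and keep the best few.
fragment_scores = {"MDL-A + benzyl": -8.4, "MDL-A + pyrazole": -8.1,
                   "MDL-A + other fragment": -6.0}   # placeholder values
top_candidates = sorted(fragment_scores, key=fragment_scores.get)[:2]
print("Top candidates:", top_candidates)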

"While we didn't promise to have a drug fully developed within the two years of the project, we're making excellent progress," said Li."The current research offers us an exciting new therapeutic paradigm: targeting tumor microenvironment and inhibiting tumor stem cell renewal, leading to a really effective way to overcome breast tumor drug resistance, inhibiting tumor metastasis and stopping tumor recurrence."

While the derivatives are not yet effective enough to be considered a viable drug, laboratory tests on tissue samples have verified their higher potency over the original MDL-A. Team members are preparing for more sophisticated testing in a lengthy and carefully monitored evaluation process.

Li's project is funded by a grant from the Department of Defense (CDMRP grant number BC095473) and supported by the award of an OSC Discovery Account. The largest funding areas of Congressionally Directed Medical Research Programs (CDMRP) are breast cancer, prostate cancer and ovarian cancer. Another Defense CDMRP grant involving Li supports a concurrent OSU investigation of the similar role that IL-6 plays in causing prostate cancer. Those projects are being conducted in collaboration with Li's Medicinal Chemistry colleague, Dr. James Fuchs, as well as Drs. Tushar Patel, Greg Lesinski and Don Benson at OSU's College of Medicine and James Cancer Hospital, and Dr. Jiayuh Lin at Nationwide Children's Hospital in Columbus.

"In addition to leading the center's user group this year, the number and depth of Dr. Li's computational chemistry projects have ranked him one of our most prolific research clients," Krishnamurthy noted.


Source

Tuesday, April 19, 2011

Clumsy Avatars: Perfection Versus Mortality in Games and Simulation

The shop is one of several projects Chang uses to explore humanity in technology. Chang, an electronic artist and recently appointed co-director of the Games and Simulation Arts and Sciences program at Rensselaer, sees the dialogue between perfection and mortality as an important influence in the growing world of games and simulation.

"There's this transcendence that technology promises us. At its extreme is the notion of immortality that -- with artificial intelligence, robotics, and virtual reality -- you could download your consciousness and take yourself out of the limitations of the physical body," said Chang."But at the same time, that's what makes us human: our frailty and our mortality."

In other words, while the"sell" behind technology is often about achieving perfection (with a smart phone all the answers are at hand, with GPS we never lose our way, in Second Life we are beautiful), the risk is a loss of humanity.

That dialogue and tension leads Chang to believe that the nascent world of gaming and simulation could become"a new cultural form" as great as literature, art, music, and theater.

"This is just the beginning; we don't really know what this is going to be, and 'games and simulation' is just the best term we have to describe a much larger form," said Chang."Twenty years ago nobody knew what the Web was going to be. There was this huge form on the horizon that we were sort of fumbling toward with different technological experiments, artistic experiments; I think this is what's going on with games and simulation right now.

"There are many things that are very difficult to do hands-on -- it's very difficult to simulate a disaster, it's very difficult to manipulate atoms and molecules at the atomic level -- and this is where simulation comes in handy," said Chang."That kind of learning experience, that way of gaining knowledge that's intuitive, that comes through experience and involvement, can be expanded to many other realms."

As an electronic artist, Chang's own work is at the intersection of virtual environments, experimental gaming, and contemporary media art.

"I'm interested in what you could call evocative and poetic experiences within technological systems -- creating that powerful experience that you can get from great music, theater, books, and paintings through immersive and interactive simulations as well," Chang said."But I'm also interested in the experiences of being human within technological systems."

Other recent projects include"Becoming," a computer-driven video installation in which the attributes of two animated figures -- each inhabiting their own space -- are interchanged."Over time, this causes each figure to take on the attributes of the other, distorted by the structure of their digital information."

In"Insecurity Camera," an installation shown at art exhibits around the country, a"shy" security camera turns away at the approach of subjects.

"What I'm interested in is getting at those human qualities that are still there," Chang said."Some of this has to do with frailty, with fumbling, weakness, and failure. These are things that can get disguised, they can get swept under the rug when we think about technology."

Chang earned a bachelor of arts in computer science from Amherst College, and a master of fine arts in art and technology studies from the Art Institute of Chicago. His installations, performances, and immersive virtual reality environments have been exhibited in numerous venues and festivals worldwide, including Boston CyberArts, SIGGRAPH, the FILE International Electronic Language Festival in Sao Paulo, the Athens MediaTerra Festival, the Wired NextFest, and the Vancouver New Forms Festival, among others. He has designed interactive exhibits for museums such as the Museum of Contemporary Art in Chicago and the Field Museum of Natural History.

Chang teaches a two-semester game development course that joins students with backgrounds in all aspects of games -- computer programming, computer science, design, art, and writing -- in the process of creating games. The students start with a design, and proceed through all the steps of planning, creating art work, writing code, and refining their game.

"Think of it as a foundation into developing games that you can take into experimental game design and stretch beyond it," Chang said.

As the"new cultural form" evolves, Chang sees ample room for exploration.

For example, said Chang, virtual reality, in which experiences are staged in a wholly digital world, leads to different implications than augmented reality, in which digital elements overlay the physical world. One implication of virtual reality -- in which, as in Second Life, users can experiment with their identity -- lies in research which suggests that personal growth gains made within the virtual world transfer to the real world. One implication of augmented reality -- in which users may add digital elements that only they can access -- is the possibility of several people sharing the same physical world while experiencing divergent realities.

In the near term, the most immediate implications for the emerging form are, as might be expected, in entertainment and education.

"What's already happening is this enrichment of the notion of what entertainment is through games," Chang said."When you talk about games, you often have ideas of simple first-person shooter or action games. But within the realm of entertainment is an immense diversity of possibilities -- from complex emotional dramatic story-based games to casual games on your cell phone. There's this range of ways of playing from competitive, multiplayer, social to creative. This is just within the entertainment realm."


Source

Sunday, April 17, 2011

Hydrocarbons Deep Within Earth: New Computational Study Reveals How

The thermodynamic and kinetic properties of hydrocarbons at high pressures and temperatures are important for understanding carbon reservoirs and fluxes in Earth.

The work provides a basis for understanding experiments that demonstrated the polymerization of methane to form higher hydrocarbons, as well as earlier methane-forming reactions under pressure.

Hydrocarbons (molecules composed of the elements hydrogen and carbon) are the main building block of crude oil and natural gas. Hydrocarbons contribute to the global carbon cycle (one of the most important cycles of Earth that allows for carbon to be recycled and reused throughout the biosphere and all of its organisms).

The team includes colleagues at UC Davis, Lawrence Livermore National Laboratory and Shell Projects & Technology. One of the researchers, UC Davis Professor Giulia Galli, is the co-chair of the Deep Carbon Observatory's Physics and Chemistry of Deep Carbon Directorate and a former LLNL researcher.

Geologists and geochemists believe that nearly all (more than 99 percent) of the hydrocarbons in commercially produced crude oil and natural gas are formed by the decomposition of the remains of living organisms, which were buried under layers of sediments in Earth's crust, a region approximately 5-10 miles below Earth's surface.

But hydrocarbons of purely chemical deep crustal or mantle origin (abiogenic) could occur in some geologic settings, such as rifts or subduction zones, said Galli, a senior author on the study.

"Our simulation study shows that methane molecules fuse to form larger hydrocarbon molecules when exposed to the very high temperatures and pressures of the Earth's upper mantle," Galli said."We don't say that higher hydrocarbons actually occur under the realistic 'dirty' Earth mantle conditions, but we say that the pressures and temperatures alone are right for it to happen.

Galli and colleagues used the Mako computer cluster in Berkeley and computers at Lawrence Livermore to simulate the behavior of carbon and hydrogen atoms at the enormous pressures and temperatures found 40 to 95 miles deep inside Earth. They used sophisticated techniques based on first principles and the computer software system Qbox, developed at UC Davis.

They found that hydrocarbons with multiple carbon atoms can form from methane (a molecule with only one carbon atom and four hydrogen atoms) at temperatures greater than 1,500 K (2,240 degrees Fahrenheit) and pressures 50,000 times those at Earth's surface (conditions found about 70 miles below the surface).
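As a quick back-of-the-envelope check (not part of the simulations themselves), those quoted conditions convert as follows.

# Unit conversions for the quoted temperature and pressure.
T_K = 1500.0
T_F = (T_K - 273.15) * 9 / 5 + 32     # ~2,240 degrees Fahrenheit, as quoted
P_atm = 50_000                         # ~50,000 times surface pressure
P_GPa = P_atm * 101_325 / 1e9          # ~5.1 gigapascals
print(f"{T_F:.0f} F, {P_GPa:.1f} GPa")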

"In the simulation, interactions with metal or carbon surfaces allowed the process to occur faster -- they act as 'catalysts,'" said UC Davis' Leonardo Spanu, the first author of the paper.

The research does not address whether hydrocarbons formed deep in Earth could migrate closer to the surface and contribute to oil or gas deposits. However, the study points to possible microscopic mechanisms of hydrocarbon formation under very high temperatures and pressures.

Galli's co-authors on the paper are Spanu; Davide Donadio at the Max Planck Institute in Mainz, Germany; Detlef Hohl at Shell Global Solutions, Houston; and Eric Schwegler of Lawrence Livermore National Laboratory.


Source

Thursday, April 14, 2011

Privacy Mode Helps Secure Android Smartphones

"There are a lot of concerns about potential leaks of personal information from smartphones," says Dr. Xuxian Jiang, an assistant professor of computer science at NC State and co-author of a paper describing the research."We have developed software that creates a privacy mode for Android systems, giving users flexible control over what personal information is available to various applications." The privacy software is called Taming Information-Stealing Smartphone Applications (TISSA).

TISSA works by creating a privacy setting manager that allows users to customize the level of information each smartphone application can access. Those settings can be adjusted any time that the relevant applications are being run -- not just when the applications are installed.

The TISSA prototype includes four possible privacy settings for each application. These settings are Trusted, Anonymized, Bogus and Empty. If an application is listed as Trusted, TISSA does not impose additional information access restrictions. If the user selects Anonymized, TISSA provides the application with generalized information that allows the application to run, without providing access to detailed personal information. The Bogus setting provides an application with fake results when it requests personal information. The Empty setting responds to information requests by saying the relevant information does not exist or is unavailable.

Jiang says TISSA could be easily modified to incorporate additional settings that would allow more fine-grained control of access to personal information."These settings may be further specialized for different types of information, such as your contact list or your location," Jiang says."The settings can also be specialized for different applications."

For example, a user may install a weather application that requires location data in order to provide the user with the local weather forecast. Rather than telling the application exactly where the user is, TISSA could be programmed to give the application generalized location data -- such as a random location within a 10-mile radius of the user. This would allow the weather application to provide the local weather forecast information, but would ensure that the application couldn't be used to track the user's movements.
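That behaviour can be sketched in a few lines. The code below is a conceptual illustration of the four settings applied to location data, not TISSA's implementation (which modifies the Android framework itself); the fixed fake coordinates and the 10-mile fuzzing radius are illustrative choices.

import math
import random
from enum import Enum

class Privacy(Enum):
    TRUSTED = 1      # hand over the real data
    ANONYMIZED = 2   # hand over generalized data
    BOGUS = 3        # hand over plausible fake data
    EMPTY = 4        # claim the data does not exist

def location_for_app(setting, lat, lon, radius_miles=10.0):
    """Return the location an application should see under a privacy setting."""
    if setting is Privacy.TRUSTED:
        return lat, lon
    if setting is Privacy.ANONYMIZED:
        # Random point within ~radius_miles of the true position
        # (one degree of latitude is roughly 69 miles).
        r = (radius_miles / 69.0) * math.sqrt(random.random())
        theta = random.uniform(0.0, 2.0 * math.pi)
        return (lat + r * math.cos(theta),
                lon + r * math.sin(theta) / math.cos(math.radians(lat)))
    if setting is Privacy.BOGUS:
        return 39.0, -98.0        # arbitrary fixed fake coordinates
    return None                    # Privacy.EMPTY: "no location available"

# A weather app gets a usable but coarse location; an untrusted app gets nothing.
print(location_for_app(Privacy.ANONYMIZED, 35.78, -78.64))
print(location_for_app(Privacy.EMPTY, 35.78, -78.64))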

The researchers are currently exploring how to make this software available to Android users."The software modification is relatively minor," Jiang says,"and could be incorporated through an over-the-air update."

The paper,"Taming Information-Stealing Smartphone Applications (on Android)," was co-authored by Jiang; Yajin Zhou, a Ph.D. student at NC State; Dr. Vincent Freeh, an associate professor of computer science at NC State; and Dr. Xinwen Zhang of Huawei America Research Center. The paper will be presented in June at the 4th International Conference on Trust and Trustworthy Computing, in Pittsburgh, Pa. The research was supported by the National Science Foundation and NC State's Secure Open Systems Initiative, which receives funding from the U.S. Army Research Office.


Source

Tuesday, April 12, 2011

Mapping the Brain: New Technique Poised to Untangle the Complexity of the Brain

A new area of research known as 'connectomics' is emerging in neuroscience. With parallels to genomics, which maps our genetic make-up, connectomics aims to map the brain's connections (known as 'synapses'). By mapping these connections -- and hence how information flows through the circuits of the brain -- scientists hope to understand how perceptions, sensations and thoughts are generated in the brain and how these functions go wrong in diseases such as Alzheimer's disease, schizophrenia and stroke.

Mapping the brain's connections is no trivial task, however: there are estimated to be one hundred billion nerve cells ('neurons') in the brain, each connected to thousands of other nerve cells -- making an estimated 150 trillion synapses. Dr Tom Mrsic-Flogel, a Wellcome Trust Research Career Development Fellow at UCL (University College London), has been leading a team of researchers trying to make sense of this complexity.

"How do we figure out how the brain's neural circuitry works?" he asks."We first need to understand the function of each neuron and find out to which other brain cells it connects. If we can find a way of mapping the connections between nerve cells of certain functions, we will then be in a position to begin developing a computer model to explain how the complex dynamics of neural networks generate thoughts, sensations and movements."

Nerve cells in different areas of the brain perform different functions. Dr Mrsic-Flogel and colleagues focus on the visual cortex, which processes information from the eye. For example, some neurons in this part of the brain specialise in detecting the edges in images; some will activate upon detection of a horizontal edge, others upon a vertical edge. Higher up in the visual hierarchy, some neurons respond to more complex visual features such as faces: lesions to this area of the brain can prevent people from being able to recognise faces, even though they can still recognise individual features such as the eyes and nose, as was famously described in the book The Man Who Mistook His Wife for a Hat by Oliver Sacks.

In a study published online April 10 in the journal Nature, the team at UCL describe a technique, developed in mice, which enables them to combine information about the function of neurons with details of their synaptic connections.

The researchers looked into the visual cortex of the mouse brain, which contains thousands of neurons and millions of different connections. Using high resolution imaging, they were able to detect which of these neurons responded to a particular stimulus, for example a horizontal edge.

Taking a slice of the same tissue, the researchers then applied small currents to a subset of neurons in turn to see which other neurons responded -- and hence which of these were synaptically connected. By repeating this technique many times, the researchers were able to trace the function and connectivity of hundreds of nerve cells in visual cortex.

The study has resolved the debate about whether local connections between neurons are random -- in other words, whether nerve cells connect sporadically, independent of function -- or whether they are ordered, for example constrained by the properties of the neuron in terms of how it responds to particular stimuli. The researchers showed that neurons which responded very similarly to visual stimuli, such as those which respond to edges of the same orientation, tend to connect to each other much more than those that prefer different orientations.
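The analysis behind that conclusion can be caricatured in a few lines: measure how similarly each pair of neurons responds to the stimuli, then compare the connection rate among similarly tuned pairs with the rate among dissimilarly tuned pairs. The sketch below runs on randomly generated stand-in data rather than the UCL recordings, so both rates come out about equal; with the real data, the first rate would be clearly higher.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in data (not the UCL data set): responses of 200 cells to 8 oriented
# stimuli, plus which ordered pairs were tested and which were connected.
n_cells = 200
responses = rng.normal(size=(n_cells, 8))
tested = rng.random((n_cells, n_cells)) < 0.5
connected = tested & (rng.random((n_cells, n_cells)) < 0.1)

def connection_rates(responses, tested, connected, threshold=0.5):
    """Connection probability for similarly vs dissimilarly tuned pairs,
    with similarity measured as the correlation of their responses."""
    corr = np.corrcoef(responses)
    similar, dissimilar = [], []
    for i in range(len(responses)):
        for j in range(len(responses)):
            if i == j or not tested[i, j]:
                continue
            (similar if corr[i, j] > threshold else dissimilar).append(connected[i, j])
    return float(np.mean(similar)), float(np.mean(dissimilar))

print(connection_rates(responses, tested, connected))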

Using this technique, the researchers hope to begin generating a wiring diagram of a brain area with a particular behavioural function, such as the visual cortex. This knowledge is important for understanding the repertoire of computations carried out by neurons embedded in these highly complex circuits. The technique should also help reveal the functional circuit wiring of regions that underpin touch, hearing and movement.

"We are beginning to untangle the complexity of the brain," says Dr Mrsic-Flogel."Once we understand the function and connectivity of nerve cells spanning different layers of the brain, we can begin to develop a computer simulation of how this remarkable organ works. But it will take many years of concerted efforts amongst scientists and massive computer processing power before it can be realised."

The research was supported by the Wellcome Trust, the European Research Council, the European Molecular Biology Organisation, the Medical Research Council, the Overseas Research Students Award Scheme and UCL.

"The brain is an immensely complex organ and understanding its inner workings is one of science's ultimate goals," says Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust."This important study presents neuroscientists with one of the key tools that will help them begin to navigate and survey the landscape of the brain."


Source

Monday, April 11, 2011

Artificial Intelligence for Improving Data Processing

Within this framework, five leading scientists presented the latest advances in their research on different aspects of AI. The speakers tackled issues ranging from the more theoretical, such as algorithms capable of solving combinatorial problems, to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation. "Inviting speakers from leading research groups allows us to offer a panoramic view of the main open problems and techniques in the area, including advances in video and multi-sensor systems, task planning, automated learning, games, and artificial consciousness or reasoning," the experts noted.

Participants from the AVIRES (Artificial Vision and Real Time Systems) research group at the University of Udine gave a seminar introducing data fusion techniques and distributed artificial vision. In particular, they dealt with automated surveillance systems based on visual sensor networks, from basic techniques for image processing and object recognition to Bayesian reasoning for understanding activities, and automated learning and data fusion for building high-performance systems. Dr. Simon Lucas, professor at the University of Essex, editor-in-chief of IEEE Transactions on Computational Intelligence and AI in Games, and a researcher focusing on the application of AI techniques to games, presented the latest trends in algorithms for generating game strategies. During his presentation, he pointed out the strength of UC3M in this area, citing its victories in two of the international competitions held during the most recent edition of the Conference on Computational Intelligence and Games.

In addition, Enrico Giunchiglia, professor at the University of Genoa and former president of the Council of the International Conference on Automated Planning and Scheduling (ICAPS), described the most recent work in the area of logical satisfiability, which is growing rapidly due to its applications in circuit design and task planning.

Artificial Intelligence (AI) is as old as computer science and has generated ideas, techniques and applications that permit difficult problems to be solved. The field is very active and offers solutions to very diverse sectors. The number of industrial applications that incorporate an AI technique is very high, and from the scientific point of view, there are many specialized journals and conferences. Furthermore, new lines of research are constantly being opened, and there is still great room for improvement in knowledge transfer between researchers and industry. These are some of the main ideas gathered at the 4th International Seminar on New Issues on Artificial Intelligence, organized by the SCALAB group in the UC3M Computer Engineering Department at the Leganés campus of this Madrid university.

The future of Artificial Intelligence

This seminar also included a talk on the promising future of AI. "The tremendous surge in the number of devices capable of capturing and processing information, together with the growth of computing capacity and advances in algorithms, enormously boosts the possibilities for practical application," the researchers from the SCALAB group pointed out. "Among them we can cite the construction of computer programs that make life easier, that make decisions in complex environments, or that allow problems to be solved in environments that are difficult for people to access," they noted. From the point of view of these research trends, more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.

AI will allow advances in the development of systems capable of automatically understanding a situation and its context from sensor data and information systems, as well as establishing plans of action, from decision-support applications to decision-making in dynamic situations. According to the researchers, this is due to rapid advances in, and the availability of, sensor technology, which provides a continuous flow of data about the environment, information that must be handled appropriately in a data- and information-fusion node. Likewise, the development of sophisticated techniques for task planning allows plans of action to be composed, executed, checked for correct execution, rectified in case of failure, and finally improved by learning from the mistakes made.

This technology has enabled a wide range of applications, such as integrated systems for surveillance, monitoring and anomaly detection, activity recognition, teleassistance systems, transport logistics planning, and more. According to Antonio Chella, Full Professor at the University of Palermo and an expert in Artificial Consciousness, the future of AI will imply discovering a new meaning of the word "intelligence." Until now, it has been equated with automated reasoning in software systems, but in the future AI will tackle more daring concepts such as the embodiment of intelligence in robots, as well as emotions and, above all, consciousness.


Source

Saturday, April 9, 2011

Control the Cursor With Power of Thought

In a new study, scientists from Washington University demonstrated that humans can control a cursor on a computer screen using words spoken out loud and in their head, a finding with huge potential applications for patients who have lost their speech through brain injury or for disabled patients with limited movement.

By directly connecting the patient's brain to a computer, the researchers showed that the computer could be controlled with up to 90% accuracy even when no prior training was given.

The study, published April 7 in IOP Publishing's Journal of Neural Engineering, involves a technique called electrocorticography (ECoG) -- the placing of electrodes directly onto a patient's brain to record electrical activity -- which has previously been used to identify regions of the brain that cause epilepsy and has led to effective treatments.

More recently, the process of ECoG has been applied to brain-computer interfaces (BCI) which aim to assist or repair brain functions and have already been used to restore the sight of one patient and stimulate limb movement in others.

The study used four patients, between the ages of 36 and 48, who suffered from epilepsy. Each patient was given a craniotomy -- an invasive procedure used to place electrodes onto the brain -- and was monitored whilst undergoing trials.

During the trials, the electrodes placed on the patient's brain picked up signals which were acquired, processed, and stored on a computer.

The trials involved the patients sitting in front of a screen and trying to move a cursor toward a target using pre-defined words that were associated with specific directions. For instance, saying or thinking of the word"AH" would move the cursor right.
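Once a word has been decoded from the brain signals, the control loop itself is simple, as the toy sketch below shows. Only the "AH" mapping comes from the study; the other word labels and the hard-coded word list (standing in for the real ECoG decoding pipeline) are invented placeholders.

# Toy sketch of the control loop: a decoded word becomes a cursor step.
WORD_TO_STEP = {
    "AH": (1, 0),    # right (as in the study)
    "OO": (-1, 0),   # left  (hypothetical)
    "EE": (0, 1),    # up    (hypothetical)
    "EH": (0, -1),   # down  (hypothetical)
}

def update_cursor(position, decoded_word):
    dx, dy = WORD_TO_STEP.get(decoded_word, (0, 0))  # unknown words: no move
    return position[0] + dx, position[1] + dy

pos = (0, 0)
for word in ["AH", "AH", "EE"]:   # pretend these came from the decoder
    pos = update_cursor(pos, word)
print(pos)                         # (2, 1)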

At some point in the future researchers hope to permanently insert implants into a patient's brain to help restore functionality and, even more impressively, read someone's mind.

Dr. Eric C Leuthardt, the lead author, of Washington University School of Medicine, said:"This is one of the earliest examples, to a very, very small extent, of what is called 'reading minds' -- detecting what people are saying to themselves in their internal dialogue."

This study was the first to demonstrate microscale ECoG recordings, meaning that future operations requiring this technology may use an implant that is very small and minimally invasive.

Also, the study identified that speech intentions can be acquired through a site that is less than a centimetre wide, which would require only a small insertion into the brain. This would greatly reduce the risk of the surgical procedure.

Dr Leuthardt continued,"We want to see if we can not just detect when you're saying dog, tree, tool or some other word, but also learn what the pure idea of that looks like in your mind. It's exciting and a little scary to think of reading minds, but it has incredible potential for people who can't communicate or are suffering from other disabilities."


Source

Friday, April 8, 2011

Mathematical Model Simulating Rat Whiskers Provides Insight Into Sense of Touch

Hundreds of papers are published each year that use the rat whisker system as a model to understand brain development and neural processing. Rats move their whiskers rhythmically against objects to explore the environment by touch. Using only tactile information from its whiskers, a rat can determine all of an object's spatial properties, including size, shape, orientation and texture.

But there is a big missing piece that prevents a full understanding of the neural signals recorded in these studies: no one knows how to represent the"touch" of a whisker in terms of mechanical variables.

"We don't understand touch nearly as well as other senses," says Mitra Hartmann, associate professor of biomedical engineering and mechanical engineering at the McCormick School of Engineering and Applied Science."We know that visual and auditory stimuli can be quantified by the intensity and frequency of light and sound, but we don't fully understand the mechanics that generate our sense of touch."

To create a model that starts to quantify these mechanics, Hartmann's team first studied the structure of the rat whisker array -- the 30 whiskers arranged in a regular pattern on each side of a rat's face. By analyzing them in both two- and three-dimensional scans, they defined the relationship between the size and shape of each whisker and its placement on the face of the rat.

Using this information, the team created a model that quantifies the full shape and structure of the rat head and whisker array. The model now allows the team to simulate the rat"whisking" against different objects and to predict the full pattern of inputs into the whisker system as a rat encounters an object. The simulations can then be compared against real behavior.

The research is published online in the journal PLoS Computational Biology.

Understanding the mechanics of the rat whisker system may provide a step toward understanding the human sense of touch.

"The big question our laboratory is interested in is how do animals, including humans, actively move their sensors through the environment and somehow turn that sensory data into a stable perception of the world," Hartmann says.

To determine how a rat can sense the shape of an object, Hartmann's team previously developed a light sheet to monitor the precise locations of the whiskers as they came in contact with the object. Using high-speed video, the team can also analyze how the rat moves its head to explore different shapes. These behavioral observations can then be paired with the output from the model.

These advances will provide insight into the sense of touch but may also enable new technologies that could make use of the whisker system. For example, Hartmann's lab created arrays of robotic whiskers that can, in several respects, mimic the capabilities of mammalian whiskers. The researchers demonstrated that these arrays can sense information about both object shape and fluid flow.

"We show that the bending moment, or torque, at the whisker base can be used to generate three-dimensional spatial representations of the environment," Hartmann says."We used this principle to make arrays of robotic whiskers that can replicate much of the basic mechanics of rat whiskers." The technology, she said, could be used to extract the three-dimensional features of almost any solid object.

Hartmann envisions that a better understanding of the whisker system may be useful for engineering applications in which the use of cameras is limited. But most importantly, a better understanding of the rat whisker system could translate into a better understanding of ourselves.

"Although whiskers and hands are very different, the basic neural pathways that process tactile information are in many respects similar across mammals," Hartmann says."A better understanding of neural processing in the whisker system may provide insights into how our own brains process information."

In addition to Hartmann, other authors of the paper are Blythe Towal, Brian Quist and Joseph Solomon, all of Northwestern, and Venkatesh Gopal of Elmhurst College.


Source