
Wednesday, July 25, 2012

Skeleton key: Diverse complex networks have similar skeletons

ScienceDaily (June 1, 2012) — Northwestern University researchers are the first to discover that very different complex networks -- ranging from global air traffic to neural networks -- share very similar backbones. By stripping each network down to its essential nodes and links, they found each network possesses a skeleton and these skeletons share common features, much like vertebrates do.

Mammals have evolved to look very different despite a common underlying structure (think of a human being and a bat), and now it appears real-world complex networks evolve in a similar way.

The researchers studied a variety of biological, technological and social networks and found that all these networks have evolved according to basic growth mechanisms. The findings could be particularly useful in understanding how something -- a disease, a rumor or information -- spreads across a network.

This surprising discovery -- that networks all have skeletons and that they are similar -- was published this week by the journal Nature Communications.

"Infectious diseases such as H1N1 and SARS spread in a similar way, and it turns out the network's skeleton played an important role in shaping the global spread," said Dirk Brockmann, senior author of the paper. "Now, with this new understanding and by looking at the skeleton, we should be able to use this knowledge in the future to predict how a new outbreak might spread."

Brockmann is associate professor of engineering sciences and applied mathematics at the McCormick School of Engineering and Applied Science and a member of the Northwestern Institute on Complex Systems (NICO).

Complex systems -- such as the Internet, Facebook, the power grid, human consciousness, even a termite colony -- generate complex behavior. A system's structure emerges locally; it is not designed or planned. Components of a network work together, interacting and influencing each other, driving the network's evolution.

For years, researchers have been trying to determine if different networks from different disciplines have hidden core structures -- backbones -- and, if so, what they look like. Extracting meaningful structural features from data is one of the most challenging tasks in network theory.

Brockmann and two of his graduate students, Christian Thiemann and first author Daniel Grady, developed a method to identify a network's hidden core structure and showed that the skeletons possess some underlying and universal features.

The networks they studied differed in size (from hundreds of nodes to thousands) and in connectivity (some were sparsely connected, others dense) but a simple and similar core skeleton was found in each one.

"The key to our approach was asking what network elements are important from each node's perspective," Brockmann said. "What links are most important to each node, and what is the consensus among nodes? Interestingly, we found that an unexpected degree of consensus exists among all nodes in a network. Nodes either agree that a link is important or they agree that it isn't. There is nearly no disagreement."

By computing this consensus -- the overall strength, or importance, of each link in the network -- the researchers were able to produce a skeleton for each network consisting of all those links that every node considers important. And these skeletons are similar across networks.
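One simple way to approximate the kind of node-by-node "vote" described above is to build a shortest-path tree from every node and record which links each tree uses; links that appear in nearly every tree form the skeleton. The sketch below illustrates that idea on a toy weighted graph using the networkx library -- it is an illustration of the consensus principle only, not the exact salience measure defined in the paper.

```python
# Sketch: approximate a network "skeleton" by asking, from each node's
# perspective, which links lie on its shortest-path tree, then keeping
# the links that (almost) every node agrees are important.
import networkx as nx
from collections import Counter

def link_consensus(G, weight="weight"):
    """Fraction of nodes whose shortest-path tree uses each edge."""
    votes = Counter()
    for root in G.nodes:
        # Shortest paths from this node define its view of the network.
        paths = nx.single_source_dijkstra_path(G, root, weight=weight)
        tree_edges = set()
        for path in paths.values():
            tree_edges.update(frozenset(e) for e in zip(path, path[1:]))
        votes.update(tree_edges)
    n = G.number_of_nodes()
    return {tuple(e): votes[e] / n for e in votes}

# Toy example: a small weighted graph standing in for a real network.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "C", 5),
                           ("C", "D", 1), ("B", "D", 4)])
consensus = link_consensus(G)
skeleton = [e for e, c in consensus.items() if c > 0.9]  # near-unanimous links
print(consensus)
print("skeleton:", skeleton)
```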

Because of this "consensus" property, the researchers' method avoids the drawbacks of other methods, which involve a degree of arbitrariness and depend on tunable parameters. The Northwestern approach is very robust and identifies essential hubs and links in a non-arbitrary, universal way.

The Volkswagen Foundation supported the research.


Story Source:

The above story is reprinted from materials provided by Northwestern University. The original article was written by Megan Fellman.

Journal Reference:

Daniel Grady, Christian Thiemann, Dirk Brockmann. Robust classification of salient links in complex networks. Nature Communications, 2012; 3: 864 DOI: 10.1038/ncomms1847


New statistical model lets patient's past forecast future ailments

ScienceDaily (June 4, 2012) — Analyzing medical records from thousands of patients, statisticians have devised a statistical model for predicting what other medical problems a patient might encounter.

Much as Netflix recommends movies and TV shows or Amazon.com suggests products to buy, the algorithm makes predictions based on what a patient has already experienced as well as the experiences of other patients with a similar medical history.

"This provides physicians with insights on what might be coming next for a patient, based on experiences of other patients. It also gives a predication that is interpretable by patients," said Tyler McCormick, an assistant professor of statistics and sociology at the University of Washington.

The algorithm will be published in an upcoming issue of the journal Annals of Applied Statistics. McCormick's co-authors are Cynthia Rudin, Massachusetts Institute of Technology, and David Madigan, Columbia University.

McCormick said that this is one of the first times that this type of predictive algorithm has been used in a medical setting. What differentiates his model from others, he said, is that it shares information across patients who have similar health problems. This allows for better predictions when details of a patient's medical history are sparse.

For example, new patients might lack a lengthy file of ailments and drug prescriptions compiled from previous doctor visits. The algorithm can compare a new patient's current health complaints with the records of other patients whose more extensive histories include similar symptoms and the timing of their onset. It can then point to what medical conditions might come next for the new patient.

"We're looking at each sequence of symptoms to try to predict the rest of the sequence for a different patient," McCormick said. If a patient has already had dyspepsia and epigastric pain, for instance, heartburn might be next.

The algorithm can also accommodate situations where it is statistically difficult to predict a less common condition. For instance, most patients do not experience strokes, so most models cannot predict one for a given patient because they factor in only that patient's own medical history. McCormick's model, by contrast, mines the medical histories of patients who went on to have a stroke and uses that analysis to make a stroke prediction.

The statisticians used medical records obtained from a multiyear clinical drug trial involving tens of thousands of patients aged 40 and older. The records included other demographic details, such as gender and ethnicity, as well as patients' histories of medical complaints and prescription medications.

They found that of the 1,800 medical conditions in the dataset, most of them -- 1,400 -- occurred fewer than 10 times. McCormick and his co-authors had to come up with a statistical way to not overlook those 1,400 conditions, while alerting patients who might actually experience those rarer conditions.

They came up with a statistical modeling technique that is grounded in Bayesian methods, the backbone of many predictive algorithms. McCormick and his co-authors call their approach the Hierarchical Association Rule Model and are working toward making it available to patients and doctors.
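The article does not spell out the Hierarchical Association Rule Model itself, but the underlying idea -- mine rules of the form "patients who had these conditions later developed that one" and rank candidates by how often the rule held -- can be sketched in a few lines. The condition histories and the simple confidence score below are invented for illustration and are not the paper's Bayesian hierarchy.

```python
# Sketch: predict a patient's next condition from association rules mined
# over other patients' histories (illustrative only; not the HARM model).
from collections import defaultdict
from itertools import combinations

histories = [  # hypothetical condition sequences from past patients
    ["dyspepsia", "epigastric pain", "heartburn"],
    ["dyspepsia", "heartburn"],
    ["epigastric pain", "heartburn", "GERD"],
    ["dyspepsia", "epigastric pain", "nausea"],
]

follows = defaultdict(lambda: defaultdict(int))  # rule LHS -> next condition -> count
support = defaultdict(int)                       # rule LHS -> number of patients with it
for seq in histories:
    for i, nxt in enumerate(seq[1:], start=1):
        seen = set(seq[:i])
        for r in range(1, len(seen) + 1):
            for lhs in combinations(sorted(seen), r):
                follows[lhs][nxt] += 1
    for r in range(1, len(set(seq)) + 1):
        for lhs in combinations(sorted(set(seq)), r):
            support[lhs] += 1

def predict(current_conditions, top=3):
    """Rank candidate next conditions by rule confidence (count / support)."""
    lhs = tuple(sorted(current_conditions))
    scores = {c: n / support[lhs] for c, n in follows.get(lhs, {}).items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

print(predict({"dyspepsia", "epigastric pain"}))
```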

"We hope that this model will provide a more patient-centered approach to medical care and to improve patient experiences," McCormick said.

The work was funded by a Google Ph.D. fellowship awarded to McCormick and by the National Science Foundation.


Story Source:

The above story is reprinted from materials provided by University of Washington, via Newswise.

Journal Reference:

Tyler H. McCormick, Cynthia Rudin, David Madigan. Bayesian Hierarchical Rule Modeling for Predicting Medical Conditions. Annals of Applied Statistics, 2012


Tuesday, July 24, 2012

Flexible channel width improves user experience on wireless systems

ScienceDaily (June 4, 2012) — Researchers from North Carolina State University have developed a technique to efficiently divide the bandwidth of the wireless spectrum in multi-hop wireless networks to improve operation and provide all users in the network with the best possible performance.

"Our objective is to maximize throughput while ensuring that all users get similar 'quality of experience' from the wireless system, meaning that users get similar levels of satisfaction from the performance they experience from whatever applications they're running," says Parth Pathak, a Ph.D. student in computer science at NC State and lead author of a paper describing the research.

Multi-hop wireless networks use multiple wireless nodes to provide coverage to a large area by forwarding and receiving data wirelessly between the nodes. However, because they have limited bandwidth and may interfere with each other's transmissions, these networks can have difficulty providing service fairly to all users within the network. Users who place significant demands on network bandwidth can effectively throw the system off balance, with some parts of the network clogging up while others remain underutilized.

Over the past few years, new technology has become available that could help multi-hop networks use their wireless bandwidth more efficiently by splitting the band into channels of varying sizes, according to the needs of the users in the network. Previously, it was only possible to form channels of equal size. However, it was unclear how multi-hop networks could take advantage of this technology, because there was not a clear way to determine how these varying channel widths should be assigned.

Now an NC State team has advanced a solution to the problem.

"We have developed a technique that improves network performance by determining how much channel width each user needs in order to run his or her applications," says Dr. Rudra Dutta, an associate professor of computer science at NC State and co-author of the paper. "This technique is dynamic. The channel width may change -- becoming larger or smaller -- as the data travels between nodes in the network. The amount of channel width allotted to users is constantly being modified to maximize the efficiency of the system and avoid what are, basically, data traffic jams."

In simulation models, the new technique results in significant improvements in a network's data throughput and in its "fairness" -- the degree to which all network users benefit from this throughput.
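The paper's exact assignment procedure is not described in this article, but the flavor of backlog-driven allocation is easy to sketch: links whose queues are backing up relative to their downstream neighbor receive a wider slice of the available spectrum. The link names, queue sizes and allocation rule below are invented for illustration.

```python
# Sketch: divide a fixed spectrum budget among wireless links in proportion
# to their relative backlog (queue at sender minus queue at receiver),
# the basic quantity used in back-pressure style scheduling.
TOTAL_MHZ = 40  # total channel width available to the mesh (assumed)

# Hypothetical per-link backlogs: (packets queued at sender, at receiver).
links = {
    ("A", "B"): (120, 30),
    ("B", "C"): (60, 55),
    ("C", "D"): (10, 0),
}

def assign_widths(links, total=TOTAL_MHZ, minimum=2):
    """Give every link a small guaranteed width, then split the rest
    in proportion to its positive relative backlog."""
    pressure = {l: max(qs - qr, 0) for l, (qs, qr) in links.items()}
    spare = total - minimum * len(links)
    total_pressure = sum(pressure.values()) or 1
    return {l: minimum + spare * p / total_pressure for l, p in pressure.items()}

for link, width in assign_widths(links).items():
    print(link, f"{width:.1f} MHz")
```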

The researchers hope to test the technique in real-world conditions using CentMesh, a wireless network on the NC State campus.

The paper, "Channel Width Assignment Using Relative Backlog: Extending Back-pressure to Physical Layer," was co-authored by former NC State master's student Sankalp Nimborkhar. The paper will be presented June 12 at the 13th International Symposium on Mobile Ad Hoc Networking and Computing in Hilton Head, S.C. The research was supported by the U.S. Army Research Office and the Secure Open Systems Initiative at NC State.


Story Source:

The above story is reprinted from materials provided by North Carolina State University.


Monday, July 23, 2012

Quantum computers will be able to simulate particle collisions

ScienceDaily (June 1, 2012) — Quantum computers are still years away, but a trio of theorists has already figured out at least one talent they may have. According to the theorists, including one from the National Institute of Standards and Technology (NIST), physicists might one day use quantum computers to study the inner workings of the universe in ways that are far beyond the reach of even the most powerful conventional supercomputers.

Quantum computers require technology that may not be perfected for decades, but they hold great promise for solving complex problems. The switches in their processors will take advantage of quantum mechanics -- the laws that govern the interaction of subatomic particles. These laws allow quantum switches to exist in both on and off states simultaneously, so they will be able to consider all possible solutions to a problem at once.

This unique talent, far beyond the capability of today's computers, could enable quantum computers to solve some currently difficult problems quickly, such as breaking complex codes. But they could look at more challenging problems as well.

"We have this theoretical model of the quantum computer, and one of the big questions is, what physical processes that occur in nature can that model represent efficiently?" said Stephen Jordan, a theorist in NIST's Applied and Computational Mathematics Division. "Maybe particle collisions, maybe the early universe after the Big Bang? Can we use a quantum computer to simulate them and tell us what to expect?"

Questions like these involve tracking the interaction of many different elements, a situation that rapidly becomes too complicated for today's most powerful computers.

The team developed an algorithm -- a series of instructions that can be run repeatedly -- that could run on any functioning quantum computer, regardless of the specific technology that will eventually be used to build it. The algorithm would simulate all the possible interactions between two elementary particles colliding with each other, something that currently requires years of effort and a large accelerator to study.

Simulating these collisions is a very hard problem for today's digital computers because the quantum state of the colliding particles is very complex and, therefore, difficult to represent accurately with a feasible number of bits. The team's algorithm, however, encodes the information that describes this quantum state far more efficiently using an array of quantum switches, making the computation far more reasonable.

A substantial amount of the work on the algorithm was done while Jordan was a postdoctoral fellow at the California Institute of Technology. His coauthors are fellow postdoc Keith S.M. Lee (now at the University of Pittsburgh) and Caltech's John Preskill, the Richard P. Feynman Professor of Theoretical Physics.

The team used the principles of quantum mechanics to prove their algorithm can sum up the effects of the interactions between colliding particles well enough to generate the sort of data that an accelerator would provide.

"What's nice about the simulation is that you can raise the complexity of the problem by increasing the energy of the particles and collisions, but the difficulty of solving the problem does not increase so fast that it becomes unmanageable," Preskill says. "It means a quantum computer could handle it feasibly."

Though their algorithm only addresses one specific type of collision, the team speculates that their work could be used to explore the entire theoretical foundation on which fundamental physics rests.

"We believe this work could apply to the entire standard model of physics," Jordan says. "It could allow quantum computers to serve as a sort of wind tunnel for testing ideas that often require accelerators today."


Story Source:

The above story is reprinted from materials provided by National Institute of Standards and Technology (NIST).

Journal Reference:

S. P. Jordan, K. S. M. Lee, J. Preskill. Quantum Algorithms for Quantum Field Theories. Science, 2012; 336 (6085): 1130 DOI: 10.1126/science.1217069


Sunday, July 22, 2012

Understanding complex relationships: How global properties of networks become apparent locally

ScienceDaily (June 7, 2012) — From infections spreading around the globe to the onset of an epileptic seizure in the brain: many phenomena can be seen as the effects of network activity. Often it is vitally important to understand the properties of these networks, yet they are frequently too complex to be described completely. Scientists from the Bernstein Center at the University of Freiburg have now shown how global features of complex networks can be discovered in local statistical properties -- which are much more accessible to scientific investigation. The researchers benefited from the high-performance computing facilities of the Bernstein Center, which are normally used to simulate the activity of nerve cells in the brain.

In an article appearing in the scientific journal PLoS ONE, Stefano Cardanobile and colleagues describe how they analysed 200,000 networks which they generated in a computer -- using models that are employed by scientists to understand the properties of naturally occurring networks. The researchers compared the results obtained from these models with well-understood networks from the real world: the metabolism of a bacterium, the relationship of synonyms in a thesaurus, and the nervous system of a worm. Thus, they were able to assess which model networks best predict the behaviour of their real-life counterparts. These insights can help colleagues from other fields choose the right model for their specific research.

Most importantly, the scientists from Freiburg were able to demonstrate that it is possible to draw conclusions about global properties of complex networks from local statistical data. This means that one can discover important properties of networks even if they cannot be completely analysed -- very often an impossible task in large systems such as human social contacts or connections in the brain. The authors therefore see their study as an important step towards a better understanding of complex networks.


Story Source:

The above story is reprinted from materials provided by Albert-Ludwigs-Universität Freiburg.

Journal Reference:

Stefano Cardanobile, Volker Pernice, Moritz Deger, Stefan Rotter. Inferring General Relations between Network Characteristics from Specific Network Ensembles. PLoS ONE, 2012; 7 (6): e37911 DOI: 10.1371/journal.pone.0037911


Searching genomic data faster: Biologists' capacity for generating genomic data is increasing more rapidly than computing power

ScienceDaily (July 10, 2012) — In 2001, the Human Genome Project and Celera Genomics announced that after 10 years of work at a cost of some $400 million, they had completed a draft sequence of the human genome. Today, sequencing a human genome is something that a single researcher can do in a couple of weeks for less than $10,000.

Since 2002, the rate at which genomes can be sequenced has been doubling every four months or so, whereas computing power doubles only every 18 months. Without the advent of new analytic tools, biologists' ability to generate genomic data will soon outstrip their ability to do anything useful with it.

In the latest issue of Nature Biotechnology, MIT and Harvard University researchers describe a new algorithm that drastically reduces the time it takes to find a particular gene sequence in a database of genomes. Moreover, the more genomes it's searching, the greater the speedup it affords, so its advantages will only compound as more data is generated.

In some sense, this is a data-compression algorithm -- like the one that allows computer users to compress data files into smaller zip files. "You have all this data, and clearly, if you want to store it, what people would naturally do is compress it," says Bonnie Berger, a professor of applied math and computer science at MIT and senior author on the paper. "The problem is that eventually you have to look at it, so you have to decompress it to look at it. But our insight is that if you compress the data in the right way, then you can do your analysis directly on the compressed data. And that increases the speed while maintaining the accuracy of the analyses."

Exploiting redundancy

The researchers' compression scheme exploits the fact that evolution is stingy with good designs. There's a great deal of overlap in the genomes of closely related species, and some overlap even in the genomes of distantly related species: That's why experiments performed on yeast cells can tell us something about human drug reactions.

Berger; her former grad student Michael Baym PhD '09, who's now a visiting scholar in the MIT math department and a postdoc in systems biology at Harvard Medical School; and her current grad student Po-Ru Loh developed a way to mathematically represent the genomes of different species -- or of different individuals within a species -- such that the overlapping data is stored only once. A search of multiple genomes can thus concentrate on their differences, saving time.
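The "store the overlap once" idea can be illustrated with a toy example: keep one shared reference sequence plus each genome's list of differences, and let genomes with no differences inherit search results from the reference directly. This is only a sketch of the principle; the published method builds a BLAST-compatible compressed index, which is not reproduced here, and the sequences below are invented.

```python
# Sketch: store one reference sequence plus per-genome differences, and
# search the reference once instead of searching every genome in full.
reference = "ACGTACGTGGA"            # shared sequence stored once
genomes = {                          # each genome as (position, new substring) edits
    "strain_1": [(4, "T")],          # differs from the reference at index 4
    "strain_2": [(8, "CCA")],
    "strain_3": [],                  # identical to the reference
}

def reconstruct(ref, edits):
    seq = list(ref)
    for pos, sub in edits:
        seq[pos:pos + len(sub)] = sub
    return "".join(seq)

def search(query):
    """Search the shared reference once; genomes with no edits inherit the
    result directly, and (for simplicity) genomes with edits are re-checked."""
    ref_hit = reference.find(query)
    hits = {}
    for name, edits in genomes.items():
        if not edits:
            hits[name] = ref_hit
        else:
            hits[name] = reconstruct(reference, edits).find(query)
    return hits

print(search("CGTG"))
```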

"If I want to run a computation on my genome, it takes a certain amount of time," Baym explains. "If I then want to run the same computation on your genome, the fact that we're so similar means that I've already done most of the work."

In experiments on a database of 36 yeast genomes, the researchers compared their algorithm to one called BLAST, for Basic Local Alignment Search Tool, one of the most commonly used genomic-search algorithms in biology. In a search for a particular genetic sequence in only 10 of the yeast genomes, the new algorithm was twice as fast as BLAST; but in a search of all 36 genomes, it was four times as fast. That discrepancy will only increase as genomic databases grow larger, Berger explains.

Matchmaking

The new algorithm would be useful in any application where the central question is, as Baym puts it: "I have a sequence; what is it similar to?" Identifying microbes is one example. The new algorithm could help clinicians determine causes of infections, or it could help biologists characterize "microbiomes," collections of microbes found in animal tissue or particular microenvironments; variations in the human microbiome have been implicated in a range of medical conditions. It could be used to characterize the microbes in particularly fertile or infertile soil, and it could even be used in forensics, to determine the geographical origins of physical evidence by its microbial signatures.

"The problem that they're looking at -- which is, given a sequence, trying to determine what known sequences are similar to it -- is probably the oldest problem in computational biology, and it's perhaps the most commonly asked question in computational biology," says Mona Singh, a professor of computer science at Princeton University and a faculty member at Princeton's Lewis-Sigler Institute for Integrative Genomics. "And the problem, just for that reason, is of central importance."

In the last 10 years, Singh says, biologists have tended to think in terms of "reference genomes" -- genomes, such as the draft human sequence released in 2001, that try to generalize across individuals within a species and even across species. "But as we're getting more and more individuals even within a species, and more very closely related sequenced distinct species, I think we're starting to move away from the idea of a single reference genome," Singh says. "Their approach is really going to shine when you have many closely related organisms."

Berger's group is currently working to extend the technique to information on proteins and RNA sequences, where it could pay even bigger dividends. Now that the human genome has been mapped, the major questions in biology are what genes are active when, and how the proteins they code for interact. Searches of large databases of biological information are crucial to answering both questions.


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology.

Journal Reference:

Po-Ru Loh, Michael Baym, Bonnie Berger. Compressive genomics. Nature Biotechnology, 2012; 30 (7): 627 DOI: 10.1038/nbt.2241


Saturday, July 21, 2012

Cyberwarfare, conservation and disease prevention could benefit from new network model

ScienceDaily (July 10, 2012) — Computer networks are the battlefields in cyberwarfare, as exemplified by the United States' recent use of computer viruses to attack Iran's nuclear program. A computer model developed at the University of Missouri could help military strategists devise the most damaging cyber attacks as well as guard America's critical infrastructure. The model also could benefit other projects involving interconnected groups, such as restoring ecosystems, halting disease epidemics and stopping smugglers.

"Our model allows users to identify the best or worst possible scenarios of network change," said Tim Matisziw, assistant professor of geography and engineering at MU. "The difficulty in evaluating a networks' resilience is that there are an infinite number of possibilities, which makes it easy to miss important scenarios. Previous studies focused on the destruction of large hubs in a network, but we found that in many cases the loss of smaller facilities can be just as damaging. Our model can suggest ways to have the maximum impact on a network with the minimum effort."

Limited resources can hinder law enforcement officers' ability to stop criminal organizations. Matisziw's model could help design plans which efficiently use a minimum of resources to cause the maximum disruption of trafficking networks and thereby reduce flows of drugs, weapons and exploited people. In a similar fashion, disease outbreaks could be mitigated by identifying and then blocking important links in their transmission, such as airports.
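A brute-force version of this kind of scenario analysis is easy to sketch: enumerate possible link removals and score how many origin-destination pairs each one disconnects. The toy network and damage score below are illustrative only and are far simpler than the flow-based model described in the paper.

```python
# Sketch: brute-force evaluation of which single link removal does the most
# damage to a network's connectivity (illustrative; the published model
# evaluates flow-based disruption scenarios far more systematically).
import itertools
import networkx as nx

G = nx.Graph([("port", "hub"), ("hub", "city1"), ("hub", "city2"),
              ("city1", "city2"), ("city2", "border")])

def damage(graph, edges_removed):
    """Damage = number of origin-destination pairs that lose all paths."""
    H = graph.copy()
    H.remove_edges_from(edges_removed)
    before = sum(1 for _ in itertools.combinations(graph, 2))
    after = sum(1 for u, v in itertools.combinations(H, 2) if nx.has_path(H, u, v))
    return before - after

worst = max(((e,) for e in G.edges), key=lambda e: damage(G, e))
print("most damaging single link to lose:", worst, "->", damage(G, worst))
```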

However, there are some networks that society needs to keep intact. After the breakdown of such a network, the model can be used to evaluate what could have made the disruption even worse and help officials prevent future problems. For example, after an electrical grid failure, such as the recent blackout in the eastern United States, future system failures could be pinpointed using the model. The critical weak points in the electrical grid could then be fortified before disaster strikes.

The model also can determine if a plan is likely to create the strongest network possible. For example, when construction projects pave over wetland ecosystems, the law requires that new wetlands be created. However, ecologists have noted that these new wetlands are often isolated from existing ecosystems and have little value to wildlife. Matisziw's model could help officials plan the best places for new wetlands so they connect with other natural areas and form wildlife corridors or stretches of wilderness that connect otherwise isolated areas and allow them to function as one ecosystem.

Matisziw's model was documented in the publicly available journal PLoS ONE. Making such a powerful tool widely available won't be a danger, Matisziw said. To use his model, a network must be understood in detail. Since terrorists and other criminals don't have access to enough data about the networks, they won't be able to use the model to develop doomsday scenarios.


Story Source:

The above story is reprinted from materials provided by University of Missouri-Columbia, via EurekAlert!, a service of AAAS.

Journal Reference:

Timothy C. Matisziw, Tony H. Grubesic, Junyu Guo. Robustness Elasticity in Complex Networks. PLoS ONE, 2012; 7 (7): e39788 DOI: 10.1371/journal.pone.0039788


Slashing energy needs for next-generation memory

ScienceDaily (June 7, 2012) — Researchers from Rice University and UCLA unveiled a new data-encoding scheme this week that slashes more than 30 percent of the energy needed to write data onto new memory cards that use "phase-change memory" (PCM) -- a competitor to flash memory that has big backing from industry heavyweights.

The breakthrough was presented at the IEEE/ACM Design Automation Conference (DAC) in San Francisco by researchers from Rice University's Adaptive Computing and Embedded Systems (ACES) Laboratory.

PCM uses the same type of materials as those used in rewritable CDs and DVDs, and it does the same job as flash memory -- the mainstay technology in USB thumb drives and memory cards for cameras and other devices. IBM and Samsung have each demonstrated PCM breakthroughs in recent months, and PCM is ultimately expected to be faster, cheaper and more energy-efficient than flash.

"We developed an optimization framework that exploits asymmetries in PCM read/write to minimize the number of bit transitions, which in turns yields energy and endurance efficiency," said researcher Azalia Mirhoseini, a Rice graduate student in electrical and computer engineering, who presented the research results at DAC.

In PCM technology, heat-sensitive materials are used to store data as ones and zeros by changing the material resistance. The electronic properties of the material change from low resistance to high resistance when heat is applied to alter the arrangement of atoms from a conducting, crystalline structure to a nonconducting, glassy structure. Writing data on PCM takes a fraction of the time required to write on flash memory, and the process is reversible but asymmetric; creating one state requires a short burst of intense heat, and reversing that state requires more time and less heat.

The new encoding method is the first to take advantage of these asymmetric physical properties. One key to the encoding scheme is reading the existing data before new data is written. Using a combination of programming approaches, the researchers created an encoder that can scan the "words" -- short sections of bits on the card -- and overwrite only the parts of the words that need to be overwritten.

"One part of the method is based on dynamic programming, which starts from small codes that we show to be optimal, and then builds upon these small codes to rapidly search for improved, longer codes that minimize the bit transitions," said lead researcher Farinaz Koushanfar, director of Rice's ACES Laboratory and assistant professor of electrical and computer engineering and of computer science at Rice.

The second part of the new method is based on integer-linear programming (ILP), a technique that can find provably optimal solutions. The more complex the problem, the longer ILP takes to find the optimum, so the team found a shortcut: they used dynamic programming to create a cheat sheet of small codes that could be quickly combined into longer ones.
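A much-simplified version of read-before-write encoding with asymmetric costs can be sketched as follows: read the stored word, then write either the new data or its bitwise complement (recording a one-bit flag), whichever causes the cheaper set of transitions. The cost values and the invert-or-not rule below are illustrative assumptions, not the dynamic-programming and ILP coder the researchers describe.

```python
# Sketch: read-before-write encoding for a memory with asymmetric write costs.
COST_SET = 1.0    # assumed cost of a 0 -> 1 transition
COST_RESET = 3.0  # assumed cost of a 1 -> 0 transition (slower / hotter)

def transition_cost(old_bits, new_bits):
    cost = 0.0
    for o, n in zip(old_bits, new_bits):
        if o == 0 and n == 1:
            cost += COST_SET
        elif o == 1 and n == 0:
            cost += COST_RESET
    return cost

def encode_write(stored, data):
    """Return (invert_flag, bits_to_write) minimizing transition cost vs. `stored`."""
    inverted = [1 - b for b in data]
    plain_cost = transition_cost(stored, data)
    inv_cost = transition_cost(stored, inverted)
    return (0, data) if plain_cost <= inv_cost else (1, inverted)

stored = [1, 1, 1, 0, 1, 1, 0, 1]   # word currently in the memory cells
data   = [0, 0, 0, 1, 0, 0, 1, 0]   # new word to store
flag, to_write = encode_write(stored, data)
print("invert flag:", flag, "write:", to_write,
      "cost:", transition_cost(stored, to_write))
```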

Research collaborator Miodrag Potkonjak, professor of computer science at UCLA, said the team's solution to PCM optimization is pragmatic.

"The overhead for ILP is practical because the codes are found only once, during the design phase," Potkonjak said. "The codes are stored for later use during PCM operation."

The researchers also found the new encoding scheme cut more than 40 percent of "memory wear," the exhaustion of memory due to rewrites. Each memory cell can handle a limited number of rewrite cycles before it becomes unusable.

The researchers said the applicability, low overhead and efficiency of the proposed optimization methods were demonstrated with extensive evaluations on benchmark data sets. In addition to PCM, they said, the encoding method is also applicable for other types of bit-accessible memories, including STT-RAM, or spin-transfer torque random-access memory.

The research was funded by the Office of Naval Research, the Army Research Office and the National Science Foundation.


Story Source:

The above story is reprinted from materials provided by Rice University.


Thursday, July 19, 2012

Search engine for social networks based on the behavior of ants

ScienceDaily (June 4, 2012) — Researchers at Universidad Carlos III de Madrid (UC3M) are developing an algorithm, based on ants' behavior when they search for food, that accelerates the search for relationships among elements present in social networks.

One of the main technical questions in the field of social networks, whose use is becoming more and more widespread, is locating the chain of references that leads from one person to another, from one node to another. The greatest challenges in this area are the enormous size of these networks and the need for a rapid response, given that the end user expects results in the shortest time possible. To solve this problem, the UC3M researchers have developed an algorithm, SoSACO, that accelerates the search for routes between two nodes of a graph representing a social network.

The way SoSACO works was inspired by behavior that one of the most disciplined insects on the planet has perfected over thousands of years while searching for food. In general, ant colony algorithms imitate the way real ants find the path between the anthill and a food source by secreting and following a chemical trail, called a pheromone, that is deposited on the ground.

"In this study -- the authors explain -- other scented trails are also included so that the ants can follow both the pheromone as well as the scent of the food, which allows them to find the food source much more quickly." The main results of this research, which was carried out by Jessica Rivero in UC3M's Laboratorio de Bases de Datos Avanzadas (The Advanced Data Bases Laboratory -- LABDA) as part of her doctoral thesis, are summarized in a scientific article published in the journal Applied Intelligence. "The early results show that the application of this algorithm to real social networks obtains an optimal response in a very short time (tens of milliseconds)," Jessica Rivero states.

Multiple applications

Thanks to this new search algorithm, the system can find these routes more easily and without modifying the structure of the graph (a representation that uses nodes and links to show the relationships among a set of elements). "This advance allows us to solve many problems that we find in the real world, because the scenarios in which they occur can be modeled by a graph," the researchers explain. Thus, it could be applied in many different settings, such as improving route finding in GPS systems or in on-line games, planning deliveries for freight trucks, determining whether two words are somehow related, or simply finding out exactly which affinities two Facebook or Twitter users have in common.

This research, which received support from the Autonomous Community of Madrid (MA2VICMR, S2009/TIC-1542) and the Ministry of Education and Science (Ministerio de Educación y Ciencia), began as part of the SOPAT project (TSI-020110-2009-419), in response to the need to guide a hotel's clients using a natural interaction system. Jessica Rivero's doctoral thesis on this subject is titled "Búsqueda Rápida de Caminos en Grafos de Alta Cardinalidad Estáticos y Dinámicos" ("A Quick Search for Routes in Static and Dynamic Graphs of High Cardinality"); it was directed by Francisco Javier Calle and Mª Dolores Cuadra, professors in the LABDA of the Computer Science Department, and received a grade of Apto-Cum Laude (Pass-Cum Laude).


Story Source:

The above story is reprinted from materials provided by Universidad Carlos III de Madrid - Oficina de Información Científica, via AlphaGalileo.


Quantum computers move closer to reality, thanks to highly enriched and highly purified silicon

ScienceDaily (June 7, 2012) — The quantum computer is a futuristic machine that could operate at speeds even more mind-boggling than the world's fastest super-computers.

Research involving physicist Mike Thewalt of Simon Fraser University offers a new step towards making quantum computing a reality, through the unique properties of highly enriched and highly purified silicon.

Quantum computers currently exist mostly as physicists' concepts and theoretical research. A few basic quantum computers have been built, but nobody yet can construct a truly practical one -- or really knows how.

Such computers will harness the powers of atoms and sub-atomic particles (ions, photons, electrons) to perform memory and processing tasks, thanks to strange sub-atomic properties.

What Thewalt and colleagues at Oxford University and in Germany have found is that their special silicon allows processes to take place and be observed in a solid state that scientists used to think required a near-perfect vacuum.

And, using this 28Si (silicon-28), they have extended to three minutes -- from a matter of seconds -- the time in which scientists can manipulate, observe and measure the processes.

"It's by far a record in solid-state systems," Thewalt says. "If you'd asked people a few years ago if this was possible, they'd have said no. It opens new ways of using solid-state semi-conductors such as silicon as a base for quantum computing.

"You can start to do things that people thought you could only do in a vacuum. What we have found, and what wasn't anticipated, are the sharp spectral lines (optical qualities) in the 28Silicon we have been testing. It's so pure, and so perfect. There's no other material like it."

But the world is still a long way from practical quantum computers, he notes.

Quantum computing is a concept that challenges everything we know or understand about today's computers.

Your desktop or laptop computer processes "bits" of information. The bit is a fundamental unit of information, seen by your computer as having a value of either "1" or "0."

That last paragraph, when written in Word, contains 181 characters including spaces. In your home computer, that simple paragraph is processed as a string of some 1,448 "1"s and "0"s.

But in the quantum computer, the "quantum bit" (also known as a "qubit") can be both a "1" and a "0" -- and all values between 0 and 1 -- at the same time.

Says Thewalt: "A classical 1/0 bit can be thought of as a person being either at the North or South Pole, whereas a qubit can be anywhere on the surface of the globe -- its actual state is described by two parameters similar to latitude and longitude."

Make a practical quantum computer with enough qubits available and it could complete in minutes calculations that would take today's super-computers years, and your laptop perhaps millions of years.

The work by Thewalt and his fellow researchers opens up yet another avenue of research and application that may, in time, lead to practical breakthroughs in quantum computing.


Story Source:

The above story is reprinted from materials provided by Simon Fraser University.

Journal Reference:

M. Steger, K. Saeedi, M. L. W. Thewalt, J. J. L. Morton, H. Riemann, N. V. Abrosimov, P. Becker, H.- J. Pohl. Quantum Information Storage for over 180 s Using Donor Spins in a 28Si 'Semiconductor Vacuum'. Science, 2012; 336 (6086): 1280 DOI: 10.1126/science.1217635


Wednesday, July 18, 2012

All things big and small: The brain's discerning taste for size

ScienceDaily (June 20, 2012) — The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation; in other words, how the brain perceives and identifies different objects. Now researchers at the MIT Computer Science and Artificial Intelligence Lab (CSAIL) and the MIT Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects.

Their findings, to be published in the June 21 issue of Neuron, could have major implications for fields like robotics, and could lead to a greater understanding of how the brain organizes and maps information.

"Prior to this study, nobody had looked at whether the size of an object was an important factor in the brain's ability to recognize it," said Aude Oliva, an associate professor in the MIT Department of Brain and Cognitive Sciences and senior author of the study.

"It's almost obvious that all objects in the world have a physical size, but the importance of this factor is surprisingly easy to miss when you study objects by looking at pictures of them on a computer screen," said Dr. Talia Konkle, lead author of the paper. "We pick up small things with our fingers, we use big objects to support our bodies. How we interact with objects in the world is deeply and intrinsically tied to their real-world size, and this matters for how our brain's visual system organizes object information."

As part of their study, Konkle and Oliva took 3D scans of brain activity during experiments in which participants were asked to look at images of big and small objects or visualize items of differing size. By evaluating the scans, the researchers found that there are distinct regions of the brain that respond to big objects (for example, a chair or a table), and small objects (for example, a paperclip or a strawberry).

By looking at the arrangement of the responses, they found a systematic organization of big to small object responses across the brain's cerebral cortex. Large objects, they learned, are processed in the parahippocampal region of the brain, an area located by the hippocampus, which is also responsible for navigating through spaces and for processing the location of different places, like the beach or a building. Small objects are handled in the inferior temporal region of the brain, near regions that are active when the brain has to manipulate tools like a hammer or a screwdriver.

The work could have major implications for the field of robotics, in particular in developing techniques for how robots deal with different objects, from grasping a pen to sitting in a chair.

"Our findings shed light on the geography of the human brain, and could provide insight into developing better machine interfaces for robots," said Oliva.

Many computer vision techniques currently focus on identifying what an object is without much guidance about the size of the object, which could be useful in recognition. "Paying attention to the physical size of objects may dramatically constrain the number of objects a robot has to consider when trying to identify what it is seeing," said Oliva.

The study's findings are also important for understanding how the organization of the brain may have evolved. The work of Konkle and Oliva suggests that the human visual system's method for organizing thousands of objects may also be tied to human interactions with the world. "If experience in the world has shaped our brain organization over time, and our behavior depends on how big objects are, it makes sense that the brain may have established different processing channels for different actions, and at the center of these may be size," said Konkle.

Oliva, a cognitive neuroscientist by training, has focused much of her research on how the brain tackles scene and object recognition, as well as visual memory. Her ultimate goal is to gain a better understanding of the brain's visual processes, paving the way for the development of machines and interfaces that can see and understand the visual world like humans do.

"Ultimately, we want to focus on how active observers move in the natural world. We think this not only matters for large-scale brain organization of the visual system, but it also matters for making machines that can see like us," said Konkle and Oliva.

This research was funded by a National Science Foundation Graduate Fellowship, and a National Eye Institute grant, and was conducted at the Athinoula A. Martinos Imaging Center at McGovern Institute for Brain Research, MIT.


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology, CSAIL.


Tuesday, July 17, 2012

Math formula leads researchers to source of pollution

ScienceDaily (June 25, 2012) — The leaking of environmentally damaging pollutants into our waters and atmosphere could soon be counteracted by a simple mathematical algorithm, according to researchers.

Presenting their research June 26 in IOP Publishing's journal Inverse Problems, the researchers, from the Université de Technologie de Compiègne, believe their work could aid efforts to avoid environmental catastrophes by identifying the exact location where pollutants have been leaked as early as possible.

In the event of an oil spill across a region of the sea, researchers could collect samples of pollutants along certain sections of the body of water and then feed this information into their algorithm.

The algorithm is then able to determine two things: the rate at which the pollutant entered the body of water and where the pollutant came from.

This isn't the first time that mathematical algorithms have been used to solve this problem; however, this new approach is unique in that it could allow researchers to 'track' the source of a pollutant if it is moving or changing in strength.

Co-author of the study, Mr Mike Andrle, said: "In the unfortunate event of a pollutant spill, either by purposeful introduction into our waters or atmosphere, or by purely accidental fate, collaboration with scientists and engineers and application of this work may save precious moments to avert more environmental damage."

The algorithm itself is modelled on the general transport of a pollutant and takes three phenomena into account: diffusion, convection and reaction.

Diffusion is where the pollutant flows naturally from high concentrations to low concentrations and convection is where other factors cause the pollutant to displace, such as a current in the sea. A pollutant may also react with other materials in the water or settle on a seabed or lake floor: this is classified as 'reaction'.
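Written out in the standard textbook form (the paper's precise model is not given in this article), those three effects are the terms of an advection-diffusion-reaction equation with an unknown source term that the inverse algorithm tries to recover:

```latex
% Generic advection-diffusion-reaction model for a pollutant concentration
% u(x, t); the source term s stands for the unknown leak (textbook form,
% not the paper's exact model).
\[
  \frac{\partial u}{\partial t}
  \;=\; \underbrace{\nabla \cdot \big( D \nabla u \big)}_{\text{diffusion}}
  \;-\; \underbrace{\mathbf{v} \cdot \nabla u}_{\text{convection}}
  \;-\; \underbrace{\lambda u}_{\text{reaction}}
  \;+\; s(x, t).
\]
```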

The researchers add that other terms could also be added into the algorithm to account for the properties of different pollutants; for example, oil may not dissolve entirely in water and may form droplets, in which case the buoyancy and settling would need to be accounted for.

Their theoretical analysis has already shown that the solution is unique; that is, the source identified is the only one possible given the observable data. The results were also shown to be very robust, which is extremely important in practice, where such measurements often carry relatively large errors.

Mr Andrle continued: "Growing up on Lake Erie, I heard of the previous shape it had been in where industry resulted in much of the lake being declared dead at one time. Though I was not alive to see it at its worst, I did witness how lots of legislation and new policies had turned its fate around.

"I saw a chance to contribute to research that may help mitigate causes of similar future events. We hope that the results of this work are substantially circulated so that those involved in pollution spill localisation and clean-up are aware of this solution."

The paper's second author, Professor Abdellatif El-Badia, said: "Inverse problems are very important in science, engineering and bioengineering. It is very interesting that we've been able to apply this topic to the very big problem of pollution."


Story Source:

The above story is reprinted from materials provided by Institute of Physics, via EurekAlert!, a service of AAAS.

Journal Reference:

M. Andrle, A. El Badia. Identification of multiple moving pollution sources in surface waters or atmospheric media with boundary observations. Inverse Problems, 2012 DOI: 10.1088/0266-5611/28/7/075009


Monday, July 16, 2012

Cloud computing: Same weakness found in seven cloud storage services

ScienceDaily (June 29, 2012) — Cloud storage services allow registration with false e-mail addresses, and Fraunhofer SIT sees the potential for espionage and malware distribution.

Security experts at the Fraunhofer Institute for Secure Information Technology (SIT) in Darmstadt have discovered that numerous cloud storage service providers do not check the e-mail addresses provided during the registration process. This fact, in combination with functions offered by these services, such as file sharing or integrated notifications, results in various possibilities for attack.

For example, attackers can bring malware into circulation or spy out confidential data. As one of the supporters of the Center for Advanced Security Research Darmstadt (CASED), the Fraunhofer SIT scrutinized various cloud storage services. The testers discovered the same weakness with the free service offerings from CloudMe, Dropbox, HiDrive, IDrive, SugarSync, Syncplicity and Wuala. Scientists from Fraunhofer SIT presented their findings on the possible forms of attacks on June 26, 2012, at the 11th International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom) in Liverpool.

Attackers do not require any programming knowledge whatsoever to exploit these weaknesses. All they need is to create an account using a false e-mail account. The attacker can then bring malware into circulation using another person's identity. With the services provided by Dropbox, IDrive, SugarSync, Syncplicity and Wuala, attackers can even spy on unsuspecting computer users with the help of the false e-mail address by encouraging them to upload confidential data to the cloud for joint access.

Fraunhofer SIT informed the affected service providers many months ago. And although these weaknesses can be removed with very simple and well-known methods, such as sending an e-mail with an activation link, not all of them are convinced that there is a need for action. Dr. Markus Schneider, Deputy Director of Fraunhofer SIT: "Dropbox, HiDrive, SugarSync, Syncplicity and Wuala have reacted after receiving our information." Some of these providers are now using confirmation e-mails to avoid this weakness, a method that has been in use for quite some time now. Others have implemented other mechanisms. "We think it is important that users are informed about the existing problems," said Schneider. "Unfortunately, it is not possible to provide 100% protection against attacks, even if the affected services are avoided. It is therefore important that cloud storage services providers remove such weaknesses, as this helps to protect users more effectively."
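The activation-link fix mentioned above follows a familiar pattern: create the account in a disabled state, e-mail an unguessable token to the address given, and activate only when that token comes back. The sketch below is a generic illustration of that pattern, with invented names; it is not any provider's actual implementation.

```python
# Sketch: e-mail confirmation -- an account starts disabled and is only
# activated when the owner of the mailbox clicks a link carrying an
# unguessable token. Generic illustration only.
import secrets

accounts = {}  # email -> {"active": bool, "token": str}

def send_mail(to, body):                       # stand-in for a real mail backend
    print(f"to={to}: {body}")

def register(email):
    token = secrets.token_urlsafe(32)          # unguessable activation token
    accounts[email] = {"active": False, "token": token}
    activation_link = f"https://storage.example.com/activate?token={token}"
    send_mail(email, f"Click to activate your account: {activation_link}")

def activate(email, token):
    acct = accounts.get(email)
    if acct and secrets.compare_digest(acct["token"], token):
        acct["active"] = True                  # only the real mailbox owner gets here
        return True
    return False

register("alice@example.com")
```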

Consumers who use the affected services should be careful. Those who receive a request to download data from the cloud or upload data to it should send an e-mail to the supposed requestor to verify whether the request was really sent by them.


Story Source:

The above story is reprinted from materials provided by Fraunhofer SIT, via AlphaGalileo.


Sunday, July 15, 2012

Quantum computers could help search engines keep up with the Internet's growth

ScienceDaily (June 12, 2012) — Most people don't think twice about how Internet search engines work. You type in a word or phrase, hit enter, and poof -- a list of web pages pops up, organized by relevance.

Behind the scenes, a lot of math goes into figuring out exactly what qualifies as the most relevant web page for your search. Google, for example, uses a page ranking algorithm that is rumored to be the largest numerical calculation carried out anywhere in the world. With the web constantly expanding, researchers at USC have proposed -- and demonstrated the feasibility of -- using quantum computers to speed up that process.

"This work is about trying to speed up the way we search on the web," said Daniel Lidar, corresponding author of a paper on the research that appeared in the journal Physical Review Letters on June 4.

As the Internet continues to grow, the time and resources needed to run the calculation -- which is done daily -- grow with it, Lidar said.

Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences, worked with colleagues Paolo Zanardi of USC Dornsife and first author Silvano Garnerone, formerly a postdoctoral researcher at USC and now of the University of Waterloo, to see whether quantum computing could be used to run the Google algorithm faster.

As opposed to traditional computer bits, which encode either a one or a zero, quantum computers use quantum bits, or "qubits," which can encode a one and a zero at the same time. This property, called superposition, could some day allow quantum computers to perform certain calculations much faster than traditional computers.

Currently, there isn't a quantum computer in the world anywhere near large enough to run Google's page ranking algorithm for the entire web. To simulate how a quantum computer might perform, the researchers generated models of the web that simulated a few thousand web pages.

The simulation showed that a quantum computer could, in principle, return the ranking of the most important pages on the web faster than traditional computers, and that this quantum speedup would grow as the number of pages to be ranked increases. Further, the researchers showed that to simply determine whether the web's page rankings should be updated, a quantum computer would be able to spit out a yes-or-no answer exponentially faster than a traditional computer.

This research was funded by a number of sources, including the National Science Foundation, the NASA Ames Research Center, the Lockheed Martin Corporation University Research Initiative program, and a Google faculty research award to Lidar.

Story Source:

The above story is reprinted from materials provided by University of Southern California.

Journal Reference:

Silvano Garnerone, Paolo Zanardi, Daniel Lidar. Adiabatic Quantum Algorithm for Search Engine Ranking. Physical Review Letters, 2012; 108 (23) DOI: 10.1103/PhysRevLett.108.230506


Saturday, July 14, 2012

'No-sleep energy bugs' drain smartphone batteries

ScienceDaily (June 13, 2012) — Researchers have proposed a method to automatically detect a new class of software glitches in smartphones called "no-sleep energy bugs," which can entirely drain batteries while the phones are not in use.

"These energy bugs are a silent battery killer,"said Y. Charlie Hu, a Purdue University professor of electrical and computer engineering. "A fully charged phone battery can be drained in as little as five hours."

Because conserving battery power is critical for smartphones, the industry has adopted "an aggressive sleep policy," he said.

"What this means is that smartphones are always in a sleep mode, by default. When there are no active user interactions such as screen touches, every component, including the central processor, stays off unless an app instructs the operating system to keep it on."

Various background operations need to be performed while the phone is idle.

"For example, a mailer may need to automatically update email by checking with the remote server," Hu said.

To prevent the phone from going to sleep during such operations, smartphone manufacturers make application programming interfaces, or APIs, available to app developers. The developers insert the APIs into apps to instruct the phone to stay awake long enough to perform necessary operations.

"App developers have to explicitly juggle different power control APIs that are exported from the operating systems of the smartphones," Hu said. "Unfortunately, programmers are only human. They make mistakes when using these APIs, which leads to software bugs that mishandle power control, preventing the phone from engaging the sleep mode. As a result, the phone stays awake and drains the battery."

Findings are detailed in a research paper being presented during the 10th International Conference on Mobile Systems, Applications and Services, or MobiSys 2012, June 25-29 in the United Kingdom. The paper was written by doctoral students Abhinav Pathak and Abhilash Jindal, Hu, and Samuel Midkiff, a Purdue professor of electrical and computer engineering.

The researchers have completed the first systematic study of the no-sleep bugs and have proposed a method for automatically detecting them.

"We've had anecdotal evidence concerning these no-sleep energy bugs, but there has not been any systematic study of them until now," Midkiff said.

The researchers studied 187 Android applications that were found to contain Android's explicit power control APIs, called "wakelocks." Of the 187 apps, 42 were found to contain errors -- or bugs -- in their wakelock code. Findings showed the new tool accurately detected all 12 previously known instances of no-sleep energy bugs and found 30 new bugs in the apps.

The glitch has been found in interactive apps, such as phone applications and telephony services on Android, that must keep working even when the user isn't touching the phone. The app may fail to engage the sleep mode after the interactive session is completed.

Smartphone users, meanwhile, don't know that their phones have the bugs.

"You don't see any difference," Hu said. "You put it in your pocket and you think everything is fine. You take it out, and your battery is dead."

To detect bugs in the applications, the researchers modified a compiler, the tool that translates code written in programming languages into the binary code that computers understand. The functionality they added lets the compiler determine where no-sleep bugs might exist.

"The tool analyzes the binary code and automatically and accurately detects the presence of the no-sleep bugs," Midkiff said.

The Purdue researchers have coined the term "power-encumbered programming" to describe the smartphone energy bugs. Researchers concentrated on the Android smartphone, but the same types of bugs appear to affect other brands, Hu said.

The research has been funded in part by the National Science Foundation. Pathak is supported by an Intel Ph.D. fellowship.

Story Source:

The above story is reprinted from materials provided by Purdue University. The original article was written by Emil Venere.


Friday, July 13, 2012

Browsing internet sites without the hurdles

ScienceDaily (June 25, 2012) — The majority of websites have major shortcomings. Sloppy programming frequently causes excessive load times, and companies are only gradually recognizing the advantages of a barrier-free Internet. Fraunhofer researchers are crafting tools that can be used to monitor compliance with web standards.

For companies in Germany, web accessibility has so far not been a compelling issue, as confirmed by a series of tests conducted in 2011 by the Fraunhofer Institute for Applied Information Technology FIT in Sankt Augustin. The scientists at the Web Compliance Center used their analysis tools to test the "web compliance" -- or adherence to international web standards -- of the Internet sites of German companies listed on the DAX. The outcome: 90 percent of the websites exhibited substantial flaws. For instance, important data could only be found after much effort, the websites took too long to load, or they were deficiently displayed on mobile devices. "'Web compliance' not only means optimizing websites so that they can be used by disabled and older persons," explains Dr. Carlos Velasco of the Web Compliance Center at FIT. "Search engines such as Google also have considerable problems with faulty sites. This may make the sites impossible to find or prevent them from ranking high in search requests. That is why this issue actually deserves a high priority."

Economic advantages through accessibility

An increasing number of companies have since realized that accessibility also brings major economic advantages. Hewlett Packard Italia, Public-I Group and Polymedia, for example, are participating in the EU research project "Inclusive Future-Internet Web Services (I2Web)." Coordinated by FIT, the project has a budget of EUR 2.7 million over two and a half years. The partners include the University of York (United Kingdom) and the University of Ljubljana (Slovenia), as well as the National Council for the Blind of Ireland and the Foundation for Assistive Technology (FAST). The participating companies offer Internet television, Video On Demand (VOD), online banking services and content management systems. These sites will soon be barrier-free.

Monitoring social networks for illegal activities

To enable site operators to monitor their sites efficiently, the FIT computer scientists developed the "imergo Web Compliance Suite" back in 2004. It is composed of a series of tools that can be integrated into content management systems. They review websites for adherence to certain rules, and these rules cover more than accessibility: for instance, one could monitor a social network such as Facebook for word groups that point to illegal activities, or a company could verify whether its corporate design standards are being met on all of its pages. "Typically, several content editors take care of large websites," says Velasco. "The suite tests whether the logo is located in the right spot on every page, for example."

The EU project "I2Web" launched in 2010 is a kind of progression from the "imergo Web Compliance Suite." The prototype contains, for instance, a development environment for an Expert Viewer. Not all accessibility guidelines can be checked automatically by a software program. For instance, photographs on a website should have a suitable alternative text. While a test tool can detect whether a text exists, it cannot determine if it also "suitably" describes what can be seen in the image. So the Expert Viewer offers a list of all relevant image texts that editors can review for the correctness of content. One important part of the EU project is conformity with interfaces, such as when customers wish to use Video On Demand or Internet TV on their televisions. "I2Web" ensures that the websites work seamlessly on all devices (if possible), and can be operated with complete accessibility.

Given the rapid pace of the Internet's evolution, the researchers at FIT will not soon run out of things to do: they will consistently have to adapt their tools to new browsers, the latest mobile devices and additional interfaces. But their work pays off: Open Text, a leading provider of content management systems, successfully markets the "imergo tools" as an additional option on its products.

Story Source:

The above story is reprinted from materials provided by Fraunhofer-Gesellschaft.


Thursday, July 12, 2012

Sharing data links in networks of cars

ScienceDaily (July 5, 2012) — A new algorithm lets networks of Wi-Fi-connected cars, whose layout is constantly changing, share a few expensive links to the Internet.

Wi-Fi is coming to our cars. Ford Motor Co. has been equipping cars with Wi-Fi transmitters since 2010; according to an Agence France-Presse story last year, the company expects that by 2015, 80 percent of the cars it sells in North America will have Wi-Fi built in. The same article cites a host of other manufacturers worldwide that either offer Wi-Fi in some high-end vehicles or belong to standards organizations that are trying to develop recommendations for automotive Wi-Fi.

Two Wi-Fi-equipped cars sitting at a stoplight could exchange information free of charge, but if they wanted to send that information to the Internet, they'd probably have to use a paid service such as the cell network or a satellite system. At the ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, taking place this month in Portugal, researchers from MIT, Georgetown University and the National University of Singapore (NUS) will present a new algorithm that would allow Wi-Fi-connected cars to share their Internet connections. "In this setting, we're assuming that Wi-Fi is cheap, but 3G is expensive," says Alejandro Cornejo, a graduate student in electrical engineering and computer science at MIT and lead author on the paper.

The general approach behind the algorithm is to aggregate data from hundreds of cars in just a small handful, which then upload it to the Internet. The problem, of course, is that the layout of a network of cars is constantly changing in unpredictable ways. Ideally, the aggregators would be those cars that come into contact with the largest number of other cars, but they can't be identified in advance.

Cornejo, Georgetown's Calvin Newport and NUS's Seth Gilbert -- all three of whom did or are doing their doctoral work in Nancy Lynch's group at MIT's Computer Science and Artificial Intelligence Laboratory -- began by considering the case in which every car in a fleet of cars will reliably come into contact with some fraction -- say, 1/x -- of the rest of the fleet in a fixed period of time. In the researchers' scheme, when two cars draw within range of each other, only one of them conveys data to the other; the selection of transmitter and receiver is random. "We flip a coin for it," Cornejo says.

Over time, however, "we bias the coin toss," Cornejo explains. "Cars that have already aggregated a lot will start 'winning' more and more, and you get this chain reaction. The more people you meet, the more likely it is that people will feed their data to you." The shift in probabilities is calculated relative to 1/x -- the fraction of the fleet that any one car will meet.

The smaller the value of x, the smaller the number of cars required to aggregate the data from the rest of the fleet. But for realistic assumptions about urban traffic patterns, Cornejo says, 1,000 cars could see their data aggregated by only about five.
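
A rough simulation shows how the biased coin toss concentrates data in a small handful of cars. In the sketch below, whenever two cars meet, the receiver is chosen with probability proportional to how much each has already aggregated; the fleet size, number of meetings and uniform meeting pattern are arbitrary assumptions, so the sketch illustrates the mechanism rather than reproducing the researchers' analysis.

    # Toy simulation of biased aggregation in a randomly meeting fleet of cars.
    import random

    random.seed(1)
    n_cars, n_meetings = 1000, 200_000
    data = [1] * n_cars          # each car starts with one unit of its own data

    for _ in range(n_meetings):
        a, b = random.sample(range(n_cars), 2)
        if data[a] == 0 and data[b] == 0:
            continue             # nothing left to hand over between these two
        # Biased coin: the car that has aggregated more is more likely to receive.
        if random.random() < data[a] / (data[a] + data[b]):
            receiver, sender = a, b
        else:
            receiver, sender = b, a
        data[receiver] += data[sender]
        data[sender] = 0

    aggregators = [d for d in data if d > 0]
    print(f"{len(aggregators)} cars now hold all the data")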

Realistically, it's not a safe assumption that every car will come in contact with a consistent fraction of the others: A given car might end up collecting some other cars' data and then disappearing into a private garage. But the researchers were able to show that, if the network of cars can be envisioned as a series of dense clusters with only sparse connections between them, the algorithm will still work well.

Weirdly, however, the researchers' mathematical analysis shows that if the network is a series of dense clusters with slightly more connections between them, aggregation is impossible. "There's this paradox of connectivity where if you have these isolated clusters, which are well-connected, then we can guarantee that there will be aggregation in the clusters," Cornejo says. "But if the clusters are well connected, but they're not isolated, then we can show that it's impossible to aggregate. It's not only our algorithm that fails; you can't do it."

"In general, the ability to have cheap computers and cheap sensors means that we can generate a huge amount of data about our environment," says John Heidemann, a research professor at the University of Southern California's Information Sciences Institute. "Unfortunately, what's not cheap is communications."

Heidemann says that the real advantage of aggregation is that it enables the removal of redundancies in data collected by different sources, so that transmitting the data requires less bandwidth. Although Heidemann's research focuses on sensor networks, he suspects that networks of vehicles could partake of those advantages as well. "If you were trying to analyze vehicle traffic, there's probably 10,000 cars on the Los Angeles Freeway that know that there's a traffic jam. You don't need every one of them to tell you that," he says.

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty.


Wednesday, July 11, 2012

Researchers advance biometric security

ScienceDaily (June 21, 2012) — Researchers in the Biometric Technologies Laboratory at the University of Calgary have developed a way for security systems to combine different biometric measurements -- such as eye colour, face shape or fingerprints -- and create a learning system that simulates the brain in making decisions about information from different sources.

Professor Marina Gavrilova, the founding head of the lab -- among the first in the research community to introduce and study neural network based models for information fusion -- says they have developed a biometric security system that simulates learning patterns and cognitive processes of the brain.

"Our goal is to improve accuracy and as a result improve the recognition process," says Gavrilova. "We looked at it not just as a mathematical algorithm, but as an intelligent decision making process and the way a person will make a decision."

The algorithm can learn new biometric patterns and associate data from different data sets, allowing the system to combine information, such as fingerprint, voice, gait or facial features, instead of relying on a single set of measurements.

The key is the ability to combine features from multiple sources of information, prioritise them by identifying the more important or prevalent features, and adapt the decision-making to changing conditions such as poor-quality data samples, sensor errors or the absence of one of the biometrics.

"It's a kind of artificial intelligence application that can learn new things, patterns and features," Gavrilova says. With this new multi-dimensional approach, a security system can train itself to learn the most important features of any new data and incorporate it in the decision making process.

"The neural network allows a system to combine features from different biometrics in one, learn them to make the optimal decision about the most important features, and adapt to a different environment where the set of features changes. This is a different, more flexible approach."

Biometric information is becoming more common in our daily lives, being incorporated in drivers' licenses, passports and other forms of identification. Gavrilova says the work in her lab is not only pioneering the intelligent decision-making methodology for human recognition but is also important for maintaining security in virtual worlds and avatar recognition.

The research has been published in several journals, including Visual Computer and International Journal of Information Technology and Management, as well as being presented in 2011 at the CyberWorlds and International Conference on Cognitive Informatics & Cognitive Computing in Banff.

Story Source:

The above story is reprinted from materials provided by University of Calgary.


Wednesday, July 4, 2012

Sensing technology: Motherboard monitoring inspired by the immune system

ScienceDaily (Jan. 22, 2012) — The prevalence of computer networks for sharing resources places increasingly high requirements on the reliability of data centres. The simplest way to diagnose abnormalities in these systems is to monitor the output of each component, but this is not always effective.

Now Haruki Shida, Takeshi Okamoto and Yoshiteru Ishida at Toyohashi University of Technology have drawn inspiration from biological immune systems to develop a new model for detecting abnormal operation of network components more accurately.

Their model mimics biological immune systems where cells test each other to protect against disease. In the immunity-based diagnostic model, the sensors for the individual components are also linked for mutual testing. An algorithm determines the credibility of each sensor from comparisons of output from other sensors in the network.

The researchers tested the approach in a simulation of a motherboard in which they monitored the temperature, voltage and fan speed of the central processing unit and core. The immunity-based diagnostic model identified abnormal nodes more accurately than isolated sensors.

The researchers also developed a hybrid network combining isolated and immunity-based sensing. Here the immunity-based diagnostic model used a correlation-based network, which removes connections between sensors that have weakly correlated output. Compared with the fully connected network, the hybrid model further improved the accuracy of the tests.
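
The heart of the immunity-based model, sensors judging each other's credibility by mutual comparison, can be sketched in a few lines. Below, each sensor's credibility is simply the fraction of its linked sensors whose readings agree with it within a tolerance; the readings and links are invented, so this stands in for, rather than reproduces, the published algorithm.

    # Simplified mutual-testing diagnosis: a sensor is trusted if most of the
    # sensors it is linked to report consistent values. Readings are invented.

    readings = {"cpu_temp": 61.0, "core_temp": 63.5, "voltage_temp_est": 62.0,
                "fan_model_temp": 95.0}          # the last sensor is faulty
    links = {                                     # mutual-testing network
        "cpu_temp": ["core_temp", "voltage_temp_est", "fan_model_temp"],
        "core_temp": ["cpu_temp", "voltage_temp_est", "fan_model_temp"],
        "voltage_temp_est": ["cpu_temp", "core_temp"],
        "fan_model_temp": ["cpu_temp", "core_temp"],
    }

    def credibility(sensor, tolerance=5.0):
        neighbours = links[sensor]
        agree = sum(abs(readings[sensor] - readings[n]) <= tolerance for n in neighbours)
        return agree / len(neighbours)

    for s in readings:
        print(s, round(credibility(s), 2))   # the faulty sensor scores lowest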

The work will contribute to identifying abnormal component behaviour to avoid system failure.

Story Source:

The above story is reprinted from materials provided by Toyohashi University of Technology, via ResearchSEA.


Faster-than-fast Fourier transform

ScienceDaily (Jan. 18, 2012) — The Fourier transform is one of the most fundamental concepts in the information sciences. It's a method for representing an irregular signal -- such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker -- as a combination of pure frequencies. It's universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things.

The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.

At the Association for Computing Machinery's Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic -- a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers -- discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.

"Weighted" means that some of those frequencies count more toward the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That's why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.

Heavyweight division

Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called "sparse." The new algorithm determines the weights of a signal's most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.

"In nature, most of the normal signals are sparse," says Dina Katabi, one of the developers of the new algorithm. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn't be sparse -- but neither would it be a signal that anyone cares about.

The new algorithm -- which associate professor Katabi and professor Piotr Indyk, both of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh -- relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight.

In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies outside that range will be attenuated still more; and so on, until you reach the frequencies that are filtered out almost perfectly.

If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can't be identified. So the researchers' first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.

Zeroing in

Once they've isolated a slice of spectrum, however, the researchers still have to identify the most heavily weighted frequency in that slice. In the SODA paper, they do this by repeatedly cutting the slice of spectrum into smaller pieces and keeping only those in which most of the signal power is concentrated. But in an as-yet-unpublished paper, they describe a much more efficient technique, which borrows a signal-processing strategy from 4G cellular networks. Frequencies are generally represented as up-and-down squiggles, but they can also be thought of as oscillations; by sampling the same slice of bandwidth at different times, the researchers can determine where the dominant frequency is in its oscillatory cycle.
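
For the simplest case, a slice containing a single dominant tone, the time-shift idea can be illustrated directly: the phase advance between two samples taken a known interval apart reveals the frequency. The sketch below shows only that basic relationship, with made-up numbers; it is not the 4G-inspired procedure described in the unpublished paper.

    # Locating a single dominant frequency from the phase advance between
    # two samples taken a known time apart (illustration only).
    import numpy as np

    true_freq = 7.25            # Hz, the "heavy" frequency in the slice
    delta_t = 0.01              # seconds between the two samples

    x0 = np.exp(2j * np.pi * true_freq * 0.0)        # sample at t = 0
    x1 = np.exp(2j * np.pi * true_freq * delta_t)    # sample at t = delta_t

    phase_advance = np.angle(x1 / x0)                 # in (-pi, pi]
    estimate = phase_advance / (2 * np.pi * delta_t)  # unambiguous if |f| < 1/(2*delta_t)
    print(f"estimated frequency: {estimate:.2f} Hz")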

Two University of Michigan researchers -- Anna Gilbert, a professor of mathematics, and Martin Strauss, an associate professor of mathematics and of electrical engineering and computer science -- had previously proposed an algorithm that improved on the FFT for very sparse signals. "Some of the previous work, including my own with Anna Gilbert and so on, would improve upon the fast Fourier transform algorithm, but only if the sparsity k" -- the number of heavily weighted frequencies -- "was considerably smaller than the input size n," Strauss says. The MIT researchers' algorithm, however, "greatly expands the number of circumstances where one can beat the traditional FFT," Strauss says. "Even if that number k is starting to get close to n -- to all of them being important -- this algorithm still gives some improvement over FFT."

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty.

Journal Reference:

Haitham Hassanieh, Piotr Indyk, Dina Katabi, Eric Price. Nearly Optimal Sparse Fourier Transform. arXiv preprint, 2012


Tuesday, July 3, 2012

Sound rather than sight can activate 'seeing' for the blind, say researchers

ScienceDaily (Feb. 8, 2012) — Scientists at the Hebrew University of Jerusalem have tapped into the visual cortex of the congenitally blind by using sensory substitution devices (SSDs), enabling the blind in effect to "see" and even describe objects.

SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature video camera connected to a small computer (or smart phone) and stereo headphones.

The images are converted into "soundscapes," using a predictable algorithm, allowing the user to listen to and then interpret the visual information coming from the camera.
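
To give a rough sense of what such a conversion can look like, the sketch below scans a small grey-scale image column by column, mapping row position to pitch and brightness to loudness, in the spirit of well-known visual-to-auditory substitution schemes. The image, frequency range and timing are invented, and this is a generic illustration rather than the specific encoding used with the devices studied in Amedi's lab.

    # Toy visual-to-auditory mapping: each image column becomes a short chord in
    # which higher rows sound as higher pitches and brighter pixels sound louder.
    import numpy as np

    def column_to_chord(column, f_low=200.0, f_high=2000.0, duration=0.05, rate=8000):
        rows = len(column)
        t = np.arange(int(duration * rate)) / rate
        freqs = np.geomspace(f_high, f_low, rows)      # top row -> highest pitch
        chord = sum(brightness * np.sin(2 * np.pi * f * t)
                    for brightness, f in zip(column, freqs))
        return chord / max(rows, 1)

    # A hypothetical 4x4 grey-scale "image" (values in [0, 1]), scanned left to right.
    image = np.array([[0.0, 1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0, 1.0]])
    soundscape = np.concatenate([column_to_chord(image[:, c]) for c in range(image.shape[1])])
    print(soundscape.shape)   # a 1-D audio signal ready to be played back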

Remarkably, proficient users who have had dedicated (but relatively brief) training as part of a research protocol in the laboratory of Dr. Amir Amedi, of the Edmond and Lily Safra Center for Brain Sciences and the Institute for Medical Research Israel-Canada at the Hebrew University, are able to use SSDs to identify complex everyday objects, locate people and their postures, and read letters and words.

In addition to SSDs' clinical potential, functional magnetic resonance imaging of congenitally blind individuals opens a unique window for studying how the visual cortex is organized in the absence of visual experience.

The results of the study in Amedi's lab, recently published in the journal Cerebral Cortex, are surprising. Not only can the sounds, which represent vision, activate the visual cortex of people who have never seen before, but they do so in a way organized according to the large-scale organization and segregation of the two visual processing streams.

For the past three decades, it has been known that visual processing is carried out in two parallel pathways. The ventral occipito-temporal "what" pathway, or the "ventral stream," has been linked with visual processing of form, object identity and color. Its counterpart is considered to be the dorsal occipito-parietal "where/how" pathway, or the "dorsal stream," which analyzes visuo-spatial information about object location and participates in visuo-motor planning.

Although this double dissociation between the processing of the two streams has been thoroughly validated, what remained unclear was the role of visual experience in shaping this functional architecture of the brain. Does this fundamental large-scale organizational principle depend on visual experience?

Using sensory substitution, the Hebrew University scientists, led by Ph.D. student Ella Striem-Amit and Dr. Amedi, discovered that the visual cortex of the blind shows a similar dorsal/ventral visual pathway division-of-labor when perceiving sounds that convey the relevant visual information; e.g., when the blind are requested to identify either the location or the shape of an SSD "image," they activate an area in the dorsal or in the ventral streams, respectively.

This shows that the most important large-scale organization of the visual system into the two streams can develop, at least to some extent, even without any visual experience, suggesting that this division of labor is not inherently visual in nature.

Recent research from Amedi's lab and from other research groups has demonstrated that multiple brain areas are not specific to their input sense (vision, audition or touch), but rather to the task or computation they perform, which may be carried out with various modalities.

Extending these findings to a large-scale division of labor of the visual system further contributes crucial information toward the idea that the whole brain may be task-specific rather than dependent on a specific sensory input. "The brain is not a sensory machine, although it often looks like one; it is a task machine," summed up Amedi.

These findings suggest that the blind brain can potentially be "awakened" to processing visual properties and tasks, even after lifelong blindness, with the aid of visual rehabilitation using future medical advances such as retinal prostheses, say the researchers. A summary of these ideas was published recently in a review in Current Opinion in Neurology by Lior Reich and Shachar Maidenbaum from Amedi's lab.

Story Source:

The above story is reprinted from materials provided by Hebrew University of Jerusalem, via AlphaGalileo.

Journal Reference:

E. Striem-Amit, O. Dakwar, L. Reich, A. Amedi. The Large-Scale Organization of 'Visual' Streams Emerges Without Visual Experience. Cerebral Cortex, 2011; DOI: 10.1093/cercor/bhr253


Sunday, July 1, 2012

Grading the online dating industry

ScienceDaily (Feb. 6, 2012) — The report card is in, and the online dating industry won't be putting this one on the fridge. A new scientific report concludes that although online dating offers users some very real benefits, it falls far short of its potential.

Unheard of just twenty years ago, online dating is now a billion dollar industry and one of the most common ways for singles to meet potential partners. Many websites claim that they can help you find your "soulmate." But do these online dating services live up to all the hype?

Not exactly, according to an article to be published in a forthcoming issue of Psychological Science in the Public Interest, a journal of the Association for Psychological Science.

In the article, a team of psychological scientists aims to get at the truth behind online dating, identifying the ways in which online dating may benefit or undermine singles' romantic outcomes.

Lead author Eli Finkel, Associate Professor of Social Psychology at Northwestern University, recognizes that "online dating is a marvelous addition to the ways in which singles can meet potential romantic partners," but he warns that "users need to be aware of its many pitfalls."

Many online dating sites claim that they possess an exclusive formula, a so-called "matching algorithm," that can match singles with partners who are especially compatible with them. But, after systematically reviewing the evidence, the authors conclude that such claims are unsubstantiated and likely false.

"To date, there is no compelling evidence that any online dating matching algorithm actually works," Finkel observes. "If dating sites want to claim that their matching algorithm is scientifically valid, they need to adhere to the standards of science, which is something they have uniformly failed to do. In fact, our report concludes that it is unlikely that their algorithms can work, even in principle, given the limitations of the sorts of matching procedures that these sites use."

The authors suggest that the existing matching algorithms neglect the most important insights from the flourishing discipline of relationship science. The algorithms seek to predict long-term romantic compatibility from characteristics of the two partners before they meet. Yet the strongest predictors of relationship well-being, such as a couple's interaction style and ability to navigate stressful circumstances, cannot be assessed with such data.

According to Finkel, "developers of matching algorithms have tended to focus on the information that is easy for them to assess, like similarity in personality and attitudes, rather than the information that relationship science has found to be crucial for predicting long-term relationship well-being. As a result, these algorithms are unlikely to be effective."

Many online dating sites market their ability to offer online daters access to a huge number of potential partners. However, online profiles are a feeble substitute for face-to-face contact when it comes to the crucial task of assessing romantic chemistry. Furthermore, browsing through all those online profiles may overwhelm people or encourage them to treat their search more like shopping than mate-finding, which can lead singles to pass over potential partners who are actually well-suited to them.

Finkel and his co-authors conclude that online dating is successful insofar as it rapidly helps singles meet potential partners in person, so that they can discover whether a romantic spark is there. The chats and messages people send through online dating sites may even help them to convey a positive initial impression, as long as people meet face-to-face relatively quickly.

Given the potentially serious consequences of intervening in people's romantic lives, the authors hope that this report will push proprietors to build a more rigorous scientific foundation for online dating services. In a preface to the report, psychological scientist Arthur Aron at the State University of New York at Stony Brook recommends the creation of a panel that would grade the scientific credibility of each online dating site.

"Thus far, the industry certainly does not get an A for effort," noted Finkel. "For years, the online dating industry has ignored actual relationship science in favor of unsubstantiated claims and buzzwords, like 'matching algorithms,' that merely sound scientific."

He added, "In the comments section of the report card, I would write: 'apply yourself!'"

Finkel co-authored this report with Paul Eastwick, assistant professor of psychology at Texas A&M University; Benjamin Karney, professor of psychology at the University of California, Los Angeles; Harry Reis, professor of psychology at the University of Rochester; and Susan Sprecher, professor of sociology and psychology at Illinois State University.

Story Source:

The above story is reprinted from materials provided by Association for Psychological Science.

Journal Reference:

Eli Finkel et al. Online Dating: A Critical Analysis From the Perspective of Psychological Science. Psychological Science in the Public Interest, 2012
