Friday, August 31, 2012

Frog calls inspire a new algorithm for wireless networks

ScienceDaily (July 17, 2012) — Males of the Japanese tree frog have learnt not to call at the same time so that females can distinguish between them. Scientists at the Polytechnic University of Catalonia have used this calling behaviour to create an algorithm that assigns colours to network nodes -- a task with applications in the development of efficient wireless networks.

How can network nodes be coloured with the fewest possible colours so that no two connected nodes share the same colour? A team of researchers at the Polytechnic University of Catalonia has found a solution to this mathematical problem with the help of some rather special colleagues: Japanese tree frogs (Hyla japonica).

These male amphibians use their calls to attract females, who recognise where a call comes from and then locate the suitor. The problem arises when two males are too close to one another and call at the same time: the females become confused and are unable to determine the location of either call. The males have therefore had to learn to 'desynchronise' their calls -- that is, not to call at the same time -- so that they can be told apart.

"Since there is no system of central control organising this "desynchronisation," the mechanism may be considered as an example of natural self-organisation," explains Christian Blum. With the help of his colleague Hugo Hernández, such behaviour provided inspiration for "solving the so-called 'graph colouring problem' in an even and distributed way."

A graph is a set of connected nodes. As with the frogs' desynchronised calls, operating in a 'distributed' fashion means that there is no central control with a global view of the situation and complete information to help solve the problem.

In the same way, the researchers have devised a new algorithm for assigning colours to network nodes that ensures no two connected nodes receive the same colour. The goal is to generate a valid solution that uses the fewest possible colours.
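
The paper gives the full distributed algorithm; purely as an illustration of the idea, the Python sketch below has each node repeatedly check only its neighbours' colours and move away from any clash, loosely analogous to the frogs shifting the timing of their calls. The graph, the conflict rule and the retry limit are invented for the example and are not the authors' method.

```python
import random

def distributed_coloring(adjacency, max_colors, rounds=200):
    """Toy 'frog-style' distributed colouring: each node looks only at its
    neighbours' current colours and hops away from clashes. Illustrative
    sketch only, not the published algorithm."""
    colors = {node: random.randrange(max_colors) for node in adjacency}
    for _ in range(rounds):
        clashes = False
        for node, neighbours in adjacency.items():
            taken = {colors[n] for n in neighbours}
            if colors[node] in taken:                      # clash with a neighbour
                clashes = True
                free = [c for c in range(max_colors) if c not in taken]
                # prefer a non-clashing colour; otherwise jump at random
                colors[node] = random.choice(free) if free else random.randrange(max_colors)
        if not clashes:                                    # valid colouring reached
            break
    return colors

# A four-node ring needs only two colours.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(distributed_coloring(ring, max_colors=2))
```

Run on the four-node ring, the loop typically settles on a valid two-colouring within a handful of rounds, with no node ever needing a global view of the graph.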

Application to WiFi connections

As Blum outlines, "this type of graph colouring is the formalisation of a problem that arises in many areas of the real world, such as the optimisation of modern wireless networks with no predetermined structure, using techniques that reduce the loss of data packets and improve energy efficiency."

This study falls under the field of 'swarm intelligence', a branch of artificial intelligence that aims to design intelligent systems with multiple agents. This is inspired by the collective behaviour of animal societies such as ant colonies, flocks of birds, shoals of fish and frogs, as in this case.

Story Source:

The above story is reprinted from materials provided by Plataforma SINC, via AlphaGalileo.

Journal Reference:

Hugo Hernández, Christian Blum. Distributed graph coloring: an approach based on the calling behavior of Japanese tree frogs. Swarm Intelligence, 2012; 6 (2): 117 DOI: 10.1007/s11721-012-0067-2

Robots will quickly recognize and respond to human gestures, with new algorithms

ScienceDaily (May 23, 2012) — New intelligent algorithms could help robots to quickly recognize and respond to human gestures. Researchers at A*STAR Institute for Infocomm Research in Singapore have created a computer program which recognizes human gestures quickly and accurately, and requires very little training.

Many works of science fiction have imagined robots that could interact directly with people to provide entertainment, services or even health care. Robotics is now at a stage where some of these ideas can be realized, but it remains difficult to make robots easy to operate.

One option is to train robots to recognize and respond to human gestures. In practice, however, this is difficult because a simple gesture such as waving a hand may appear very different between different people. Designers must develop intelligent computer algorithms that can be 'trained' to identify general patterns of motion and relate them correctly to individual commands.

Now, Rui Yan and co-workers at the A*STAR Institute for Infocomm Research in Singapore have adapted a cognitive memory model called a localist attractor network (LAN) to develop a new system that recognizes gestures quickly and accurately and requires very little training.

"Since many social robots will be operated by non-expert users, it is essential for them to be equipped with natural interfaces for interaction with humans," says Yan. "Gestures are an obvious, natural means of human communication. Our LAN gesture recognition system only requires a small amount of training data, and avoids tedious training processes."

Yan and co-workers tested their software by integrating it with ShapeTape, a special jacket that uses fibre optics and inertial sensors to monitor the bending and twisting of hands and arms. They programmed the ShapeTape to provide data 80 times per second on the three-dimensional orientation of shoulders, elbows and wrists, and applied velocity thresholds to detect when gestures were starting.
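
The article does not spell out the thresholding rule, so the Python sketch below only illustrates what "velocity thresholds" for detecting gesture boundaries can look like on an 80-samples-per-second stream of joint orientations; the threshold values and the data format are assumptions, not details from the paper.

```python
def segment_gestures(samples, start_speed=0.5, stop_speed=0.1, rate_hz=80):
    """Toy gesture segmentation over a stream of joint-orientation vectors
    sampled at rate_hz. A gesture 'starts' when the frame-to-frame speed
    exceeds start_speed and 'ends' when it drops below stop_speed.
    Thresholds are hypothetical, not taken from the published system."""
    dt = 1.0 / rate_hz
    segments, start, in_gesture = [], None, False
    for i in range(1, len(samples)):
        # speed = Euclidean change between consecutive orientation vectors per second
        speed = sum((a - b) ** 2 for a, b in zip(samples[i], samples[i - 1])) ** 0.5 / dt
        if not in_gesture and speed > start_speed:
            in_gesture, start = True, i
        elif in_gesture and speed < stop_speed:
            in_gesture = False
            segments.append((start, i))   # (start index, end index) of one gesture
    return segments
```

Each detected segment would then be handed to the recognition stage, which maps it to a command such as forward or faster.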

In tests, five different users wore the ShapeTape jacket and used it to control a virtual robot through simple arm motions that represented commands such as forward, backwards, faster or slower. The researchers found that 99.15% of gestures were correctly translated by their system. It is also easy to add new commands, by demonstrating a new control gesture just a few times.

The next step in improving the gesture recognition system is to allow humans to control robots without the need to wear any special devices. Yan and co-workers are tackling this problem by replacing the ShapeTape jacket with motion-sensitive cameras.

"Currently we are building a new gesture recognition system by incorporating our method with a Microsoft Kinect camera," says Yan. "We will implement the proposed system on an autonomous robot to test its usability in the context of a realistic service task, such as cleaning!"

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).

Journal Reference:

Rui Yan, Keng Peng Tee, Yuanwei Chua, Haizhou Li, Huajin Tang. Gesture Recognition Based on Localist Attractor Networks with Application to Robot Control [Application Notes]. IEEE Computational Intelligence Magazine, 2012; 7 (1): 64 DOI: 10.1109/MCI.2011.2176767

Thursday, August 30, 2012

New statistical model lets patient's past forecast future ailments

ScienceDaily (June 4, 2012) — Analyzing medical records from thousands of patients, statisticians have devised a statistical model for predicting what other medical problems a patient might encounter.

Much as Netflix recommends movies and TV shows or Amazon.com suggests products to buy, the algorithm makes predictions based on what a patient has already experienced as well as on the experiences of other patients with similar medical histories.

"This provides physicians with insights on what might be coming next for a patient, based on experiences of other patients. It also gives a predication that is interpretable by patients," said Tyler McCormick, an assistant professor of statistics and sociology at the University of Washington.

The algorithm will be published in an upcoming issue of the journal Annals of Applied Statistics. McCormick's co-authors are Cynthia Rudin, Massachusetts Institute of Technology, and David Madigan, Columbia University.

McCormick said that this is one of the first times that this type of predictive algorithm has been used in a medical setting. What differentiates his model from others, he said, is that it shares information across patients who have similar health problems. This allows for better predictions when details of a patient's medical history are sparse.

For example, new patients might lack a lengthy file listing ailments and drug prescriptions compiled from previous doctor visits. The algorithm can compare such a patient's current health complaints with those of other patients who have more extensive medical records that include similar symptoms and the timing of when they arose. It can then point to what medical conditions might come next for the new patient.

"We're looking at each sequence of symptoms to try to predict the rest of the sequence for a different patient," McCormick said. If a patient has already had dyspepsia and epigastric pain, for instance, heartburn might be next.

The algorithm can also accommodate situations where it's statistically difficult to predict a less common condition. For instance, most patients do not experience strokes, so most models cannot predict one, because they factor in only the individual patient's own medical history. McCormick's model instead mines the medical histories of patients who went on to have a stroke and uses that analysis to make a stroke prediction.

The statisticians used medical records obtained from a multiyear clinical drug trial involving tens of thousands of patients aged 40 and older. The records included other demographic details, such as gender and ethnicity, as well as patients' histories of medical complaints and prescription medications.

They found that of the 1,800 medical conditions in the dataset, most -- some 1,400 -- occurred fewer than 10 times. McCormick and his co-authors had to come up with a statistical way not to overlook those 1,400 conditions while still alerting patients who might actually experience the rarer ones.

They came up with a statistical modeling technique that is grounded in Bayesian methods, the backbone of many predictive algorithms. McCormick and his co-authors call their approach the Hierarchical Association Rule Model and are working toward making it available to patients and doctors.
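
The published Hierarchical Association Rule Model is a Bayesian method and considerably more subtle; the Python sketch below only illustrates the underlying association-rule idea of pooling condition sequences across patients, with made-up condition names and raw counts standing in for the model's hierarchical estimates.

```python
from collections import defaultdict

def learn_rules(histories):
    """Count, across all patients, how often one condition follows another.
    These raw counts are a crude stand-in for 'A -> B' rule scores; the
    published model estimates such rules within a Bayesian hierarchy."""
    follows = defaultdict(lambda: defaultdict(int))
    for history in histories:
        for i, earlier in enumerate(history):
            for later in history[i + 1:]:
                follows[earlier][later] += 1
    return follows

def predict_next(rules, patient_history, top_k=3):
    """Rank conditions the patient has not yet had by summed rule scores."""
    scores = defaultdict(int)
    for seen in patient_history:
        for candidate, count in rules[seen].items():
            if candidate not in patient_history:
                scores[candidate] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

histories = [
    ["dyspepsia", "epigastric pain", "heartburn"],
    ["dyspepsia", "heartburn"],
    ["epigastric pain", "heartburn", "ulcer"],
]
rules = learn_rules(histories)
print(predict_next(rules, ["dyspepsia", "epigastric pain"]))  # ['heartburn', 'ulcer']
```

Because the scores are pooled across everyone's histories, a new patient with only two recorded complaints still gets a ranked list of plausible next conditions.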

"We hope that this model will provide a more patient-centered approach to medical care and to improve patient experiences," McCormick said.

The work was funded by a Google Ph.D. fellowship awarded to McCormick and by the National Science Foundation.

Story Source:

The above story is reprinted from materials provided by University of Washington, via Newswise.

Journal Reference:

Tyler H. McCormick, Cynthia Rudin, David Madigan. Bayesian Hierarchical Rule Modeling for Predicting Medical Conditions. Annals of Applied Statistics, 2012.

Paving the way to a scalable device for quantum information processing

ScienceDaily (July 24, 2012) — Researchers at NPL have demonstrated for the first time a monolithic 3D ion microtrap array which could be scaled up to handle several tens of ion-based quantum bits (qubits). The research, published in Nature Nanotechnology, shows how it is possible to realise this device embedded in a semiconductor chip, and demonstrates the device's ability to confine individual ions at the nanoscale.

As the UK's National Measurement Institute, NPL is interested in how exotic quantum states of matter can be used to make high-precision measurements of, for example, time and frequency ever more accurate. This research, however, has implications wider than measurement. The device could be used in quantum computation, where entangled qubits are used to execute powerful quantum algorithms. As an example, factorisation of large numbers by a quantum algorithm is dramatically faster than with a classical algorithm.

Scalable ion traps consisting of a 2D array of electrodes have been developed; however, 3D trap geometries can provide a superior potential for confining the ions. Creating a successful scalable 3D ion-trapping device depends on combining two qualities: the ability to scale the device to accommodate increasing numbers of atomic particles, and a trapping potential that preserves precise control of ions at the atomic level. Previous attempts have compromised at least one of these factors, largely due to limitations in the manufacturing processes.

The team at NPL has now produced the first monolithic ion microtrap array which uniquely combines a near ideal 3D geometry with a scalable fabrication process -- a breakthrough in this field. In terms of elementary operating characteristics, the microtrap chip outperforms all other scalable devices for ions.

Using a novel process based on conventional semiconductor fabrication technology, scientists developed the microtrap device from a silica-on-silicon wafer. The team was able to confine individual ions, and strings of up to 14 ions, in a single segment of the array. The fabrication process should enable device scaling to handle greatly increased numbers of ions, whilst retaining the ability to individually control each of them.

Due to the enormous progress in nanotechnology, the power of classical processor chips has been scaled up according to Moore's Law. Quantum processors are in their infancy, and the NPL device is a promising approach for advancing the scale of such chips for ion-based qubits.

Alastair Sinclair, Principal Scientist, NPL said: "We managed to produce an essential device or tool, which is critical for state of the art research and development in quantum technologies. This could be the basis of a future atomic clock device, with relevance for location, timing, navigation services or even the basis of a future quantum processor chip based on trapped ions, leading to a quantum computer and a quantum information network."

Story Source:

The above story is reprinted from materials provided by National Physical Laboratory.

Journal Reference:

Guido Wilpers, Patrick See, Patrick Gill, Alastair G. Sinclair. A monolithic array of three-dimensional ion traps fabricated with conventional semiconductor technology. Nature Nanotechnology, 2012; DOI: 10.1038/nnano.2012.126

Tuesday, August 28, 2012

Identifying dolphins with technology

ScienceDaily (July 31, 2012) — Dolphins all look pretty similar. So it can be problematic when your job requires you to identify individual dolphins in order to study their behavioral and ecological patterns. Photo-identification techniques -- recognizing a particular dolphin by the nicks, scars and notches on its dorsal fin -- are useful, but tedious.

"Researchers photograph dolphins in their natural surroundings and compare new dorsal fin photographs against a catalogue of previously identified dolphins," explains Kelly Debure, professor of computer science at Eckerd College in St. Petersburg, Florida. "These catalogs are often organized into categories based on either distinct fin shape or location of predominant damage. The manual photo-identification process, although effective, is extremely time consuming and visually stressful, particularly with large collections of known dolphins."

It was time to bring dolphin identification into the digital age.

Debure, along with Eckerd students, developed DARWIN, or Digital Analysis and Recognition of Whale Images on a Network, a computer program that simplifies photo-identification of bottlenose dolphins by applying computer vision and signal processing techniques to automate much of the tedious manual photo-id process.

"DARWIN is a software system which has been developed to support the creation of reliable and intuitive image database queries using fin outlines," she says. "It effectively performs registration of image data to compensate for the fact that the photographs are taken from different angles and distances and compares digital images of new dorsal fins with a database of previously identified fins."

The software uses an automated process to create a tracing of the fin outline, which is then used to formulate a sketch-based query of the database. The system utilizes a variety of image processing and computer vision algorithms to perform the matching process that identifies those previously cataloged fins which most closely resemble the unknown fin. The program ranks catalog fin images from "most like" to "least like" the new unknown fin image and presents images for side by side comparison.
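
DARWIN's actual pipeline also registers images to compensate for viewing angle and distance; as a much simplified sketch of the ranking step alone, the Python snippet below turns each fin outline into a crude turning-angle signature and orders catalogue entries from most to least similar. The signature, the distance measure and the catalogue format are illustrative assumptions, not DARWIN's internals.

```python
import math

def turning_profile(outline, samples=64):
    """Resample a fin outline (a list of (x, y) points) and return the turning
    angle at each resampled point -- a crude, scale-tolerant shape signature."""
    step = max(1, len(outline) // samples)
    pts = outline[::step]
    angles = []
    for i in range(1, len(pts) - 1):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        angles.append(a2 - a1)
    return angles

def rank_matches(new_outline, catalogue):
    """Rank catalogued fins from 'most like' to 'least like' the unknown fin.
    catalogue is assumed to be a list of dicts with an 'outline' entry."""
    query = turning_profile(new_outline)
    def distance(entry):
        profile = turning_profile(entry["outline"])
        n = min(len(query), len(profile))
        return sum((q - p) ** 2 for q, p in zip(query[:n], profile[:n]))
    return sorted(catalogue, key=distance)
```

The ranked list is what a researcher would then review side by side against the unknown photograph, exactly as the article describes.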

DARWIN is used by researchers at several academic institutions and by Eckerd College's own Dolphin Project, a team of students who conduct population surveys of the bottlenose dolphin (Tursiops truncatus). Initiated in 1993, the project has trained dozens of students to take and analyze scientific data on dolphin populations to better understand their population dynamics and ecology in Tampa Bay. Such information can be used to help conserve dolphin populations.

The DARWIN software is free and available for download. Over 20 years, Debure and Eckerd students have continued to refine it, and they are now adapting its algorithms for the identification of other species.

"Although it was originally developed for use with bottlenose dolphins, it has been used by research groups on related species such as fin whales, indo-pacific humpback dolphins, spinner dolphins, and basking sharks," she says.

With this technological help, researchers can spend more time doing their job.

"Answering the question, "Which dolphin is that?" is not that scientifically interesting," says Debure. "With DARWIN, researchers can spend less time identifying animals and more time doing the real science."

Story Source:

The above story is reprinted from materials provided by Dick Jones Communications, via Newswise.

Quantum computers will be able to simulate particle collisions

ScienceDaily (June 1, 2012) — Quantum computers are still years away, but a trio of theorists has already figured out at least one talent they may have. According to the theorists, including one from the National Institute of Standards and Technology (NIST), physicists might one day use quantum computers to study the inner workings of the universe in ways that are far beyond the reach of even the most powerful conventional supercomputers.

Quantum computers require technology that may not be perfected for decades, but they hold great promise for solving complex problems. The switches in their processors will take advantage of quantum mechanics -- the laws that govern the interaction of subatomic particles. These laws allow quantum switches to exist in both on and off states simultaneously, so they will be able to consider all possible solutions to a problem at once.

This unique talent, far beyond the capability of today's computers, could enable quantum computers to solve some currently difficult problems quickly, such as breaking complex codes. But they could look at more challenging problems as well.

"We have this theoretical model of the quantum computer, and one of the big questions is, what physical processes that occur in nature can that model represent efficiently?" said Stephen Jordan, a theorist in NIST's Applied and Computational Mathematics Division. "Maybe particle collisions, maybe the early universe after the Big Bang? Can we use a quantum computer to simulate them and tell us what to expect?"

Questions like these involve tracking the interaction of many different elements, a situation that rapidly becomes too complicated for today's most powerful computers.

The team developed an algorithm -- a series of instructions that can be run repeatedly -- that could run on any functioning quantum computer, regardless of the specific technology that will eventually be used to build it. The algorithm would simulate all the possible interactions between two elementary particles colliding with each other, something that currently requires years of effort and a large accelerator to study.

Simulating these collisions is a very hard problem for today's digital computers because the quantum state of the colliding particles is very complex and, therefore, difficult to represent accurately with a feasible number of bits. The team's algorithm, however, encodes the information that describes this quantum state far more efficiently using an array of quantum switches, making the computation far more reasonable.

A substantial amount of the work on the algorithm was done at the California Institute of Technology, while Jordan was a postdoctoral fellow. His coauthors are fellow postdoc Keith S.M. Lee (now a postdoc at the University of Pittsburgh) and Caltech's John Preskill, the Richard P. Feynman Professor of Theoretical Physics.

The team used the principles of quantum mechanics to prove their algorithm can sum up the effects of the interactions between colliding particles well enough to generate the sort of data that an accelerator would provide.

"What's nice about the simulation is that you can raise the complexity of the problem by increasing the energy of the particles and collisions, but the difficulty of solving the problem does not increase so fast that it becomes unmanageable," Preskill says. "It means a quantum computer could handle it feasibly."

Though their algorithm only addresses one specific type of collision, the team speculates that their work could be used to explore the entire theoretical foundation on which fundamental physics rests.

"We believe this work could apply to the entire standard model of physics," Jordan says. "It could allow quantum computers to serve as a sort of wind tunnel for testing ideas that often require accelerators today."

Story Source:

The above story is reprinted from materials provided by National Institute of Standards and Technology (NIST).

Journal Reference:

S. P. Jordan, K. S. M. Lee, J. Preskill. Quantum Algorithms for Quantum Field Theories. Science, 2012; 336 (6085): 1130 DOI: 10.1126/science.1217069

Monday, August 27, 2012

Better security for web and mobile applications

ScienceDaily (July 20, 2012) — A team led by Harvard computer scientists, including two undergraduate students, has developed a new tool that could lead to increased security and enhanced performance for commonly used web and mobile applications.

Called RockSalt, the clever bit of code can verify that native machine code complies with a particular security policy.

Presented at the ACM Conference on Programming Language Design and Implementation (PLDI) in Beijing in June, RockSalt was created by Greg Morrisett, Allen B. Cutting Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS); two of his undergraduate students, Edward Gan '13 and Joseph Tassarotti '13; former postdoctoral fellow Jean-Baptiste Tristan (now at Oracle); and Gang Tan of Lehigh University.

"When a user opens an external application, such as Gmail or Angry Birds, web browsers such as Google Chrome typically run the program's code in an intermediate and safer language such as JavaScript," says Morrisett. "In many cases it would be preferable to run native machine code directly."

The use of native code, especially in an online environment, however, opens up the door to hackers who can exploit vulnerabilities and readily gain access to other parts of a computer or device. An initial solution to this problem was offered over a decade ago by computer scientists at the University of California, Berkeley, who developed software fault isolation (SFI).

SFI forces native code to "behave" by rewriting machine code to limit itself to functions that fall within particular parameters. This "sandbox process" sets up a contained environment for running native code. A separate "checker" program can then ensure that the executable code adheres to regulations before running the program.
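
As a rough illustration of the sandboxing idea (not Google's or RockSalt's actual rewriting rules, and written in Python rather than machine code), the sketch below shows the classic SFI move of masking every store address so writes can only land inside the sandbox region; in real SFI the mask is inserted into the machine code itself, and a separate checker such as RockSalt proves the mask is always applied. The base address and region size are invented for the example.

```python
SANDBOX_BASE = 0x2000_0000   # hypothetical start of the sandbox region
SANDBOX_MASK = 0x0000_FFFF   # hypothetical region size of 64 KiB

def sandboxed_store(memory, address, value):
    """Classic SFI idea: force every store into the sandbox by masking the
    address, so even a corrupted pointer cannot touch memory outside it.
    In real SFI this masking is rewritten into the native code, and the
    checker's job is to verify that every store is preceded by the mask."""
    safe_address = SANDBOX_BASE | (address & SANDBOX_MASK)
    memory[safe_address] = value
    return safe_address
```

The security argument then rests entirely on the checker being correct, which is exactly the guarantee the Harvard team set out to prove.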

While considered a major breakthrough, the solution was limited to devices using RISC chips, a processor more common in research than in consumer computing. In 2006, Morrisett developed a way to implement SFI on the more popular CISC-based chips, like the Intel x86 processor. The technique was adopted widely. Google modified the routine for Google Chrome, eventually developing it into Google Native Client (or "NaCl").

When bugs and vulnerabilities were found in the checker for NaCl, Google sent out a call to arms. Morrisett once again took on the challenge, turning the problem into an opportunity for his students. The result was RockSalt, an improvement over the original NaCl checker, built using Coq, a proof development system.

"We built a simple but incredibly powerful system for proving a hypothesis -- so powerful that it's likely to be overlooked. We want to prove that if the checker says 'yes,' the code will indeed respect the sandbox security policy," says Joseph Tassarotti '13, who built and tested a model of the execution of x86 instructions. "We wanted to get a guarantee that there are no bugs in the checker, so we set out to construct a rigorous, machine-checked proof that the checker is correct."

"Our proofs about the correctness of our own tool say that if you run the tool on a program, and it says it's safe to run, then according to the model, this program can only do certain things," Tassarotti adds. "Our proof, however, was only as good as this model. If the model was wrong, then the tool could potentially have an error."

In other words, he explains, think of an analogy in physics. While you might mathematically prove that according to Newton's laws, a moving object will follow a certain trajectory, the proof is only meaningful to the degree that Newton's laws accurately model the world.

"Since the x86 architecture is very complicated, it was essential to test the model by running programs on a real chip, then simulating them with the model, and seeing whether the results matched. I specified the meanings of many of these instructions and developed the testing infrastructure to check for errors in the model," Tassarotti says.

Even more impressively, RockSalt comprises a mere 80 lines of code, as compared to the 600 lines of the original Google native code checker. The new checker is also faster, and, to date, no vulnerabilities have been uncovered. The tool offers tremendous advantages to programmers and users alike, allowing programmers to code in any language, compile it to native executable code, and secure it without going through intermediate languages such as JavaScript, and even to cross back and forth between Java and native code. This allows coders to choose the benefits of multiple languages, such as using one to ensure portability while using others to enhance performance.

"The biggest benefit may be that users can have more peace of mind that a piece of software works as they want it to," says Morrisett. "For users, the impact of such a tool is slightly more tangible; it allows users to safely run, for example, games, in a web browser without the painfully slow speeds that translated code traditionally provides."

Previous efforts to develop a robust, error-free checker have resulted in some success, but RockSalt has the potential to be scaled to software widely used by the general public. The researchers expect that their tool might end up being adopted and integrated into future versions of common web browsers. Morrisett and his team also have plans to adapt the tool for use in a broader variety of processors.

Reflecting on how the class project has been transformative, Tassarotti says, "I plan to pursue a Ph.D. in computer science, and I hope to work on projects like this that can improve the correctness of software. As computers are so prevalent now in fields like avionics and medical devices, I believe that this type of research is essential to ensure safety."

Story Source:

The above story is reprinted from materials provided by Harvard University.

Sunday, August 26, 2012

Google goes cancer: Search engine algorithm finds cancer biomarkers

ScienceDaily (May 17, 2012) — The strategy used by Google to decide which pages are relevant for a search query can also be used to determine which proteins in a patient's cancer are relevant for the disease progression. Researchers from Dresden University of Technology, Germany, have used a modified version of Google's PageRank algorithm to rank about 20,000 proteins by their genetic relevance to the progression of pancreatic cancer. In their study, published in PLoS Computational Biology, they found seven proteins that can help to assess how aggressive a patient's tumor is and guide the clinician to decide if that patient should receive chemotherapy or not.

The researchers' own version of the Google algorithm was used in this study to find new cancer biomarkers, which are molecules produced by cancer cells. Biomarkers can help to detect cancer earlier, either in body fluids or directly in the cancer tissue obtained in an operation or biopsy. Finding these biomarkers is often difficult and time consuming. Another problem is that markers found in different studies of the same types of cancer almost never overlap.

This problem has been circumvented using the Google strategy, which takes into account the content of a web page and also how these pages are connected via hyperlinks. With this strategy as the model, the authors made use of the fact that proteins in a cell are connected through a network of physical and regulatory interactions; the 'protein Facebook' so to speak.

"Once we added the network information in our analysis, our biomarkers became more reproducible," said Christof Winter, the paper's first author. Using this network information and the Google Algorithm, a significant overlap was found with an earlier study from the University of North Carolina. There, a connection was made with a protein which can assess aggressiveness in pancreatic cancer.

Although the new biomarkers seem to mark an improvement over currently used diagnostic tools, they are far from perfect and still need to be validated in a larger follow-up study before they can be used in clinical practice. Turning these insights into novel drugs that slow down cancer progression remains an open problem. A first step in this direction is the group's cooperation with the Dresden-based biotech company RESprotect, which is running a clinical trial on a pancreatic cancer drug.

TU Dresden is a leading German university, whose Center for Regenerative Therapies was awarded excellence status in the national excellence initiative. The work was a cooperation between the bioinformatics group of Prof. Dr. Michael Schroeder and the medical groups of Dr. Christian Pilarsky and Prof. Robert Grützmann.

Story Source:

The above story is reprinted from materials provided by Public Library of Science.

Journal Reference:

Christof Winter, Glen Kristiansen, Stephan Kersting, Janine Roy, Daniela Aust, Thomas Knösel, Petra Rümmele, Beatrix Jahnke, Vera Hentrich, Felix Rückert, Marco Niedergethmann, Wilko Weichert, Marcus Bahra, Hans J. Schlitt, Utz Settmacher, Helmut Friess, Markus Büchler, Hans-Detlev Saeger, Michael Schroeder, Christian Pilarsky, Robert Grützmann. Google Goes Cancer: Improving Outcome Prediction for Cancer Patients by Network-Based Ranking of Marker Genes. PLoS Computational Biology, 2012; 8 (5): e1002511 DOI: 10.1371/journal.pcbi.1002511

Saturday, August 25, 2012

A simple way to help cities monitor traffic more accurately

ScienceDaily (Aug. 7, 2012) — Cities count the number of cars on the road in order to plan everything from the timing of stoplights to road repairs. But the in-road metal detectors that do the counting can make errors -- most often by registering that a car is present when one isn't.

One common error is called "splashover" because it usually involves an over-sensitive detector picking up the presence of a vehicle in the next lane over -- as if the signal from the car "splashed over" into the adjacent lane.

Now Ohio State University researchers have developed software to help city managers easily identify detectors that are prone to splashover and reprogram them to get more accurate numbers.

Benjamin Coifman, associate professor of Civil, Environmental and Geodetic Engineering at Ohio State, and doctoral student Ho Lee describe the software in the October 2012 issue of the journal Transportation Research Part C: Emerging Technologies.

For the study, Coifman and Lee monitored 68 in-road detectors in Columbus, Ohio. They found six detectors that were prone to erroneously detecting cars in adjacent lanes. Error rates ranged from less than 1 percent to 52 percent.

"A host of city services rely on these data. We've known about splashover for decades, but up until now, nobody had an effective automatic test for finding it," said Coifman. "With this software, we can help transportation departments know which detectors to trust when deciding how they should put their limited dollars to work."

People may not be familiar with the commonly used loop detectors, which are often present at intersections to activate a stoplight. When the detectors are visible, they look like rectangular cutouts in the road surface, where underground wiring connects the detector to a traffic box at the side of the road. The same detectors are often present at freeway onramps and exits, to help cities monitor congestion.

To see how often splashover occurred in the 68 detectors in the study, the researchers went to the sites, and noted whether a car was truly present each time a detector counted a car. Then they used those data to construct computer algorithms that would automatically identify the patterns of error.
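
As a rough sketch of how such ground-truth observations can be turned into an automatic flag (the published test is more sophisticated and ultimately works from detector data alone), the Python snippet below estimates each detector's false-detection rate from paired records of whether the detector fired and whether a vehicle was actually present, then flags detectors above an assumed threshold. The record format and threshold are assumptions.

```python
def flag_splashover(records, max_false_rate=0.05):
    """records: {detector_id: [(fired, vehicle_present), ...]} from manual
    ground-truthing at the roadside. Flags detectors whose rate of firing
    with no vehicle present exceeds max_false_rate. Sketch only; not the
    published algorithm."""
    flagged = {}
    for detector, observations in records.items():
        firings = [present for fired, present in observations if fired]
        if not firings:
            continue
        false_rate = firings.count(False) / len(firings)   # fired, but no car there
        if false_rate > max_false_rate:
            flagged[detector] = false_rate
    return flagged
```

A transportation department could then concentrate its recalibration effort on the handful of detectors the flagging step singles out.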

In tests, the software correctly identified four of the six detectors that exhibited splashover. The two it missed were sites with error rates less than 1 percent -- specifically 0.6 percent and 0.9 percent.

"We might not catch detectors in which one in 100 or one in 1,000 vehicles trigger splashover," Coifman said, "but for the detectors where the rate is one in 20, we'll catch it."

The discovery comes just as many American cities are moving toward the use of different technologies, such as roadside radar detectors, to monitor traffic.

"The world is moving away from loop detectors," Coifman added. "And the radar sensors that are replacing loop detectors are actually more prone to splashover-like errors."

These radar detectors bounce a signal off a car and measure the time it takes for the signal to return. Because the detectors are on the side of the road, small measurement errors often cause a single vehicle to be counted in two separate lanes by the radar.

The same algorithms they developed for loop detectors should work for radar detectors, Coifman said. The makers of radar detectors keep their software proprietary, so he can't readily test that hypothesis, though he points out that all of the details of the Ohio State algorithms are fully explained in the article, should radar makers wish to incorporate them into their products.

This study was facilitated by the Ohio Department of Transportation, and funded by NEXTRANS, the U.S. Department of Transportation Region V Regional University Transportation Center; and by the California PATH (Partners for Advanced Highways and Transit) Program of the University of California, in cooperation with the State of California Business, Transportation and Housing Agency, Department of Transportation.

Story Source:

The above story is reprinted from materials provided by Ohio State University. The original article was written by Pam Frost Gorder.

Friday, August 24, 2012

Cyberwarfare, conservation and disease prevention could benefit from new network model

ScienceDaily (July 10, 2012) — Computer networks are the battlefields in cyberwarfare, as exemplified by the United States' recent use of computer viruses to attack Iran's nuclear program. A computer model developed at the University of Missouri could help military strategists devise the most damaging cyber attacks as well as guard America's critical infrastructure. The model also could benefit other projects involving interconnected groups, such as restoring ecosystems, halting disease epidemics and stopping smugglers.

"Our model allows users to identify the best or worst possible scenarios of network change," said Tim Matisziw, assistant professor of geography and engineering at MU. "The difficulty in evaluating a networks' resilience is that there are an infinite number of possibilities, which makes it easy to miss important scenarios. Previous studies focused on the destruction of large hubs in a network, but we found that in many cases the loss of smaller facilities can be just as damaging. Our model can suggest ways to have the maximum impact on a network with the minimum effort."

Limited resources can hinder law enforcement officers' ability to stop criminal organizations. Matisziw's model could help design plans which efficiently use a minimum of resources to cause the maximum disruption of trafficking networks and thereby reduce flows of drugs, weapons and exploited people. In a similar fashion, disease outbreaks could be mitigated by identifying and then blocking important links in their transmission, such as airports.

However, there are some networks that society needs to keep intact. After the breakdown of such a network, the model can be used to evaluate what could have made the disruption even worse and help officials prevent future problems. For example, after an electrical grid failure, such as the recent blackout in the eastern United States, future system failures could be pinpointed using the model. The critical weak points in the electrical grid could then be fortified before disaster strikes.

The model also can determine if a plan is likely to create the strongest network possible. For example, when construction projects pave over wetland ecosystems, the law requires that new wetlands be created. However, ecologists have noted that these new wetlands are often isolated from existing ecosystems and have little value to wildlife. Matisziw's model could help officials plan the best places for new wetlands so they connect with other natural areas and form wildlife corridors: stretches of wilderness that link otherwise isolated areas and allow them to function as one ecosystem.

Matisziw's model was documented in the publicly available journal PLoS ONE. Making such a powerful tool widely available won't be a danger, Matisziw said. To use his model, a network must be understood in detail. Since terrorists and other criminals don't have access to enough data about the networks, they won't be able to use the model to develop doomsday scenarios.

Story Source:

The above story is reprinted from materials provided by University of Missouri-Columbia, via EurekAlert!, a service of AAAS.

Journal Reference:

Timothy C. Matisziw, Tony H. Grubesic, Junyu Guo. Robustness Elasticity in Complex Networks. PLoS ONE, 2012; 7 (7): e39788 DOI: 10.1371/journal.pone.0039788

Thursday, August 23, 2012

Skeleton key: Diverse complex networks have similar skeletons

ScienceDaily (June 1, 2012) — Northwestern University researchers are the first to discover that very different complex networks -- ranging from global air traffic to neural networks -- share very similar backbones. By stripping each network down to its essential nodes and links, they found each network possesses a skeleton and these skeletons share common features, much like vertebrates do.

Mammals have evolved to look very different despite a common underlying structure (think of a human being and a bat), and now it appears real-world complex networks evolve in a similar way.

The researchers studied a variety of biological, technological and social networks and found that all these networks have evolved according to basic growth mechanisms. The findings could be particularly useful in understanding how something -- a disease, a rumor or information -- spreads across a network.

This surprising discovery -- that networks all have skeletons and that they are similar -- was published this week by the journal Nature Communications.

"Infectious diseases such as H1N1 and SARS spread in a similar way, and it turns out the network's skeleton played an important role in shaping the global spread," said Dirk Brockmann, senior author of the paper. "Now, with this new understanding and by looking at the skeleton, we should be able to use this knowledge in the future to predict how a new outbreak might spread."

Brockmann is associate professor of engineering sciences and applied mathematics at the McCormick School of Engineering and Applied Science and a member of the Northwestern Institute on Complex Systems (NICO).

Complex systems -- such as the Internet, Facebook, the power grid, human consciousness, even a termite colony -- generate complex behavior. A system's structure emerges locally; it is not designed or planned. Components of a network work together, interacting and influencing each other, driving the network's evolution.

For years, researchers have been trying to determine if different networks from different disciplines have hidden core structures -- backbones -- and, if so, what they look like. Extracting meaningful structural features from data is one of the most challenging tasks in network theory.

Brockmann and two of his graduate students, Christian Thiemann and first author Daniel Grady, developed a method to identify a network's hidden core structure and showed that the skeletons possess some underlying and universal features.

The networks they studied differed in size (from hundreds of nodes to thousands) and in connectivity (some were sparsely connected, others dense), but a simple and similar core skeleton was found in each one.

"The key to our approach was asking what network elements are important from each node's perspective," Brockmann said. "What links are most important to each node, and what is the consensus among nodes? Interestingly, we found that an unexpected degree of consensus exists among all nodes in a network. Nodes either agree that a link is important or they agree that it isn't. There is nearly no disagreement."

By computing this consensus -- the overall strength, or importance, of each link in the network -- the researchers were able to produce a skeleton for each network consisting of all those links that every node considers important. And these skeletons are similar across networks.
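
The exact salience measure is defined in the paper; the Python sketch below captures only the flavour of the consensus idea: build a shortest-path tree from every node, treat the links that tree uses as "important" from that node's perspective, and keep the links that nearly all nodes vote for. The 90 percent threshold and the weighted-graph format are assumptions made for the example.

```python
import heapq
from collections import defaultdict

def shortest_path_tree_links(weights, root):
    """Dijkstra from root over weights = {node: {neighbour: weight}}.
    Returns the set of links used in the shortest-path tree, i.e. the links
    that 'matter' from this node's perspective."""
    dist, parent = {root: 0.0}, {}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in weights[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return {frozenset((v, p)) for v, p in parent.items()}

def skeleton(weights, consensus=0.9):
    """Keep the links that at least `consensus` of all nodes consider
    important. A sketch of the consensus idea, not the paper's exact
    salience definition."""
    votes = defaultdict(int)
    for root in weights:
        for link in shortest_path_tree_links(weights, root):
            votes[link] += 1
    n = len(weights)
    return {tuple(link) for link, count in votes.items() if count / n >= consensus}
```

The striking empirical finding in the paper is that these per-node votes are nearly bimodal: links are either endorsed by almost everyone or by almost no one, which is what makes the extracted skeleton non-arbitrary.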

Because of this "consensus" property, the researchers' method does not have the drawbacks of other methods, which have degrees of arbitrariness in them and depend on parameters. The Northwestern approach is very robust and identifies essential hubs and links in a non-arbitrary universal way.

The Volkswagen Foundation supported the research.

Story Source:

The above story is reprinted from materials provided by Northwestern University. The original article was written by Megan Fellman.

Journal Reference:

Daniel Grady, Christian Thiemann, Dirk Brockmann. Robust classification of salient links in complex networks. Nature Communications, 2012; 3: 864 DOI: 10.1038/ncomms1847

Wednesday, August 22, 2012

Security risk: Sensitive data can be harvested from a PC even if it is in standby mode, experts say

ScienceDaily (Aug. 10, 2012) — When you switch off your computer, any passwords you used to log in to web pages, your bank or other financial accounts evaporate into the digital ether, right? Not so fast! Researchers in Greece have discovered a security loophole that exploits the way computer memory works and could be used to harvest passwords and other sensitive data from a PC even if it is in standby mode.

Writing in a forthcoming issue of the International Journal of Electronic Security and Digital Forensics, Christos Georgiadis of the University of Macedonia in Thessaloniki and colleagues Stavroula Karayianni and Vasilios Katos at the Democritus University of Thrace in Xanthi explain how their discovery could be used by information specialists in forensic science for retrieving incriminating evidence from computers as well as exploited by criminals to obtain personal data and bank details.

The researchers point out that most computer users assume that switching off their machine removes any data held in random access memory (RAM). This fast memory is used by the computer to temporarily hold data needed by currently running applications. RAM is often described as volatile memory because anything it contains is considered lost when the computer is switched off; indeed, all data is lost from RAM when the power supply is disconnected, so it is volatile in that sense.

However, Georgiadis and colleagues have now shown that data held in RAM is not lost if the computer is switched off but the mains electricity supply is not interrupted. They suggest that forensics experts and criminals might thus be able to access data from the most recently used applications. They point out that starting a new memory-intensive application will overwrite data in RAM while a computer is being used, but simply powering off the machine leaves users vulnerable in terms of security and privacy.

"The need to capture and analyse the RAM contents of a suspect PC grows constantly as remote and distributed applications have become popular, and RAM is an important source of evidence," the team explains, as it can contain telltale traces of networks accessed and the unencrypted forms of passwords sent to login boxes and online forms.

The team tested their approach to retrieving data from RAM after a computer had been switched off, following a general and common usage scenario involving accessing Facebook, Gmail, Microsoft Network (MSN) and Skype. They carried out RAM dumps immediately after switch-off, and at 5, 15 and 60 minutes. They then used well-known forensic repair tools to piece together the various fragments of data retrieved from the memory dumps.

The team was able to reconstruct login details from the memory dumps for several popular services being used in the Firefox web browser including Google Mail (GMail), Facebook, Hotmail, and the WinRar file compression application. "We can conclude that volatile memory loses data under certain conditions and in a forensic investigation such memory can be a valuable source of evidence," the team says.
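
The authors used established forensic tools for this reconstruction; purely as an illustration of the kind of carving involved, the Python snippet below scans a raw memory-dump file for printable strings that follow common form-field markers such as "password=". The marker list, string length limit and dump format are assumptions, not the study's toolchain.

```python
import re

def carve_credentials(dump_path, markers=(b"password=", b"passwd=", b"login=")):
    """Scan a raw RAM dump for printable strings that follow common form-field
    markers. A crude illustration of memory carving; real investigations rely
    on dedicated forensic tools and far more robust parsing."""
    pattern = re.compile(
        b"(" + b"|".join(re.escape(m) for m in markers) + rb")([\x20-\x7e]{1,64})"
    )
    hits = []
    with open(dump_path, "rb") as f:
        data = f.read()
    for match in pattern.finditer(data):
        # record the marker found and the printable bytes that follow it
        hits.append((match.group(1).decode(), match.group(2).decode(errors="replace")))
    return hits
```

The same idea explains the defensive lesson of the study: anything an application leaves in RAM in plain text may be recoverable for minutes after shutdown if power is still applied.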

Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.

Journal Reference:

Christos Georgiadis, Stavroula Karayianni and Vasilios Katos. A framework for password harvesting from volatile memory. International Journal of Electronic Security and Digital Forensics, 2012

Tuesday, August 21, 2012

Cloud computing: Same weakness found in seven cloud storage services

ScienceDaily (June 29, 2012) — Cloud storage services allow registration using false e-mail addresses, and Fraunhofer SIT sees the possibility for espionage and malware distribution.

Security experts at the Fraunhofer Institute for Secure Information Technology (SIT) in Darmstadt have discovered that numerous cloud storage service providers do not check the e-mail addresses provided during the registration process. This fact, in combination with functions provided by these services, such as file sharing or integrated notifications, results in various possibilities for attack.

For example, attackers can bring malware into circulation or spy out confidential data. As one of the supporters of the Center for Advanced Security Research Darmstadt (CASED), the Fraunhofer SIT scrutinized various cloud storage services. The testers discovered the same weakness with the free service offerings from CloudMe, Dropbox, HiDrive, IDrive, SugarSync, Syncplicity and Wuala. Scientists from Fraunhofer SIT presented their findings on the possible forms of attacks on June 26, 2012, at the 11th International Conference on Trust, Security and Privacy in Computing and Communications (IEEE TrustCom) in Liverpool.

Attackers do not require any programming knowledge whatsoever to exploit these weaknesses. All they need to do is create an account using a false e-mail address. The attacker can then bring malware into circulation using another person's identity. With the services provided by Dropbox, IDrive, SugarSync, Syncplicity and Wuala, attackers can even use the false e-mail address to spy on unsuspecting computer users by encouraging them to upload confidential data to the cloud for joint access.

Fraunhofer SIT informed the affected service providers many months ago. And although these weaknesses can be removed with very simple and well-known methods, such as sending an e-mail with an activation link, not all of them are convinced that there is a need for action. Dr. Markus Schneider, Deputy Director of Fraunhofer SIT: "Dropbox, HiDrive, SugarSync, Syncplicity and Wuala have reacted after receiving our information." Some of these providers are now using confirmation e-mails to avoid this weakness, a method that has been in use for quite some time now. Others have implemented other mechanisms. "We think it is important that users are informed about the existing problems," said Schneider. "Unfortunately, it is not possible to provide 100% protection against attacks, even if the affected services are avoided. It is therefore important that cloud storage services providers remove such weaknesses, as this helps to protect users more effectively."
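
The "activation link" fix mentioned above is the familiar confirmation e-mail. As a minimal sketch of that mechanism (with invented names and an in-memory store standing in for a real database and mail service), the Python snippet below issues a random token at registration and only marks the address as verified when the token from the e-mailed link is presented back.

```python
import secrets

pending = {}      # email -> activation token (stand-in for a real datastore)
verified = set()  # addresses whose owners have clicked the activation link

def register(email):
    """Create an account in the unverified state and return the token that
    would be embedded in the activation link sent to the address."""
    token = secrets.token_urlsafe(32)
    pending[email] = token
    return token   # a real service would e-mail a link containing this token

def confirm(email, token):
    """Activate the account only if the token matches, proving the registrant
    actually controls the mailbox -- the check the study found missing."""
    if pending.get(email) == token:
        verified.add(email)
        del pending[email]
        return True
    return False
```

Without that round trip, anyone can register under someone else's address and then share files or requests in that person's name, which is exactly the attack the Fraunhofer team describes.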

Consumers who use the affected services should be careful. Those who receive a request to download data from the cloud or upload data to it should send an e-mail to the supposed requestor to verify whether the request was really sent by them.

Story Source:

The above story is reprinted from materials provided by Fraunhofer SIT, via AlphaGalileo.

Monday, August 20, 2012

Computer scientists present smile database

ScienceDaily (July 30, 2012) — What exactly happens to your face when you smile spontaneously, and how does that affect how old you look? Computer scientists from the University of Amsterdam's (UvA) Faculty of Science recorded the smiles of hundreds of visitors to the NEMO science centre in Amsterdam, thus creating the most comprehensive smile database ever. The results can be seen via the link below. The research was conducted as part of the project Science Live, sponsored by the Netherlands Organisation for Scientific Research (NWO) and the Royal Netherlands Academy of Arts and Sciences (KNAW).

A total of 481 test subjects participated in the research by Theo Gevers and Albert Ali Salah. The researchers made a video recording of a posed smile and a spontaneous smile for each participant. The subjects were also asked to look angry, happy, sad, surprised and scared. Gevers and Salah analysed certain characteristics, such as how quickly the corners of the mouth turn upwards. This knowledge can be applied to computer software that guesses ages, recognises emotions and analyses human behaviour.

The researchers also asked the test subjects to look at images of other test subjects. They had to guess the age of those people and state how attractive they found them. They were also asked to judge character traits, such as whether the person seemed helpful by nature, or whether that person was perhaps in love.

The data collected allowed the researchers to develop software that can estimate people's age. The software takes into account whether someone is happy, sad or angry, and adjusts its estimate accordingly. The software appears to be slightly better at estimating ages than humans: on average, humans' estimates are seven years off, while the computer's are six years off.

The research of Gevers and Salah also shows that you look younger when you smile, but only if you are over forty. If you are under forty, you should keep a neutral expression if you want to come across as younger.
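A toy Python sketch of that finding (the three-year offset and the expression labels are invented for illustration and are not the UvA software) shows how an age estimator might correct for a smile:

def adjusted_age_estimate(raw_estimate, expression):
    # Toy rule based on the reported finding: a smiling face over forty
    # reads younger than it really is, so add back a few years; the
    # 3-year correction is an arbitrary illustrative number.
    if expression == "smile" and raw_estimate > 40:
        return raw_estimate + 3
    return raw_estimate

print(adjusted_age_estimate(47, "smile"))    # -> 50
print(adjusted_age_estimate(30, "neutral"))  # -> 30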

Smile Database: http://www.uva-nemo.org/


Story Source:

The above story is reprinted from materials provided by Universiteit van Amsterdam (UVA).

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Sunday, August 19, 2012

Quantum computers move closer to reality, thanks to highly enriched and highly purified silicon

ScienceDaily (June 7, 2012) — The quantum computer is a futuristic machine that could operate at speeds even more mind-boggling than the world's fastest super-computers.

Research involving physicist Mike Thewalt of Simon Fraser University offers a new step towards making quantum computing a reality, through the unique properties of highly enriched and highly purified silicon.

Quantum computers right now exist mostly as concepts in physicists' theoretical research. There are some basic quantum computers in existence, but nobody yet can build a truly practical one -- or really knows how.

Such computers will harness the powers of atoms and sub-atomic particles (ions, photons, electrons) to perform memory and processing tasks, thanks to strange sub-atomic properties.

What Thewalt and colleagues at Oxford University and in Germany have found is that their special silicon allows processes to take place and be observed in a solid state that scientists used to think required a near-perfect vacuum.

And, using this 28Si, they have extended to three minutes -- from a matter of seconds -- the time in which scientists can manipulate, observe and measure the processes.

"It's by far a record in solid-state systems," Thewalt says. "If you'd asked people a few years ago if this was possible, they'd have said no. It opens new ways of using solid-state semi-conductors such as silicon as a base for quantum computing.

"You can start to do things that people thought you could only do in a vacuum. What we have found, and what wasn't anticipated, are the sharp spectral lines (optical qualities) in the 28Silicon we have been testing. It's so pure, and so perfect. There's no other material like it."

But the world is still a long way from practical quantum computers, he notes.

Quantum computing is a concept that challenges everything we know or understand about today's computers.

Your desktop or laptop computer processes "bits" of information. The bit is a fundamental unit of information, seen by your computer as having a value of either "1" or "0."

That last paragraph, when written in Word, contains 181 characters including spaces. In your home computer, that simple paragraph is processed as a string of some 1,448 "1"s and "0"s.
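The arithmetic behind that figure is simply eight bits per character; a short Python sketch (the placeholder string stands in for the actual paragraph) reproduces it:

def to_bits(text):
    # Each character is stored as one byte, i.e. eight "1"s and "0"s.
    return "".join(format(b, "08b") for b in text.encode("ascii"))

paragraph = "x" * 181            # stand-in for the 181-character paragraph
print(len(to_bits(paragraph)))   # 181 * 8 = 1448 ones and zeros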

But in the quantum computer, the "quantum bit" (also known as a "qubit") can be both a "1" and a "0" -- and all values between 0 and 1 -- at the same time.

Says Thewalt: "A classical 1/0 bit can be thought of as a person being either at the North or South Pole, whereas a qubit can be anywhere on the surface of the globe -- its actual state is described by two parameters similar to latitude and longitude."
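In the standard textbook notation (a general fact about qubits, not a result of this study), those two parameters appear explicitly; a qubit's state can be written as

|\psi\rangle = \cos(\theta/2)\,|0\rangle + e^{i\varphi}\sin(\theta/2)\,|1\rangle, with 0 \le \theta \le \pi and 0 \le \varphi < 2\pi,

where \theta plays the role of latitude and \varphi the role of longitude on the sphere: \theta = 0 is the "North Pole" state |0\rangle and \theta = \pi is the "South Pole" state |1\rangle.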

Make a practical quantum computer with enough qubits available and it could complete in minutes calculations that would take today's super-computers years, and your laptop perhaps millions of years.

The work by Thewalt and his fellow researchers opens up yet another avenue of research and application that may, in time, lead to practical breakthroughs in quantum computing.


Story Source:

The above story is reprinted from materials provided by Simon Fraser University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

M. Steger, K. Saeedi, M. L. W. Thewalt, J. J. L. Morton, H. Riemann, N. V. Abrosimov, P. Becker, H.- J. Pohl. Quantum Information Storage for over 180 s Using Donor Spins in a 28Si 'Semiconductor Vacuum'. Science, 2012; 336 (6086): 1280 DOI: 10.1126/science.1217635

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Rooting out rumors, epidemics, and crime -- with math

ScienceDaily (Aug. 10, 2012) — A team of EPFL scientists has developed an algorithm that can identify the source of an epidemic or information circulating within a network, a method that could also be used to help with criminal investigations.

Investigators are well aware of how difficult it is to trace an unlawful act to its source. The job was arguably easier with old, Mafia-style criminal organizations, as their hierarchical structures more or less resembled predictable family trees.

In the Internet age, however, the networks used by organized criminals have changed. Innumerable nodes and connections escalate the complexity of these networks, making it ever more difficult to root out the guilty party. EPFL researcher Pedro Pinto of the Audiovisual Communications Laboratory and his colleagues have developed an algorithm that could become a valuable ally for investigators, criminal or otherwise, as long as a network is involved. The team's research was published August 10, 2012, in the journal Physical Review Letters.

Finding the source of a Facebook rumor

"Using our method, we can find the source of all kinds of things circulating in a network just by 'listening' to a limited number of members of that network," explains Pinto. Suppose you come across a rumor about yourself that has spread on Facebook and been sent to 500 people -- your friends, or even friends of your friends. How do you find the person who started the rumor? "By looking at the messages received by just 15-20 of your friends, and taking into account the time factor, our algorithm can trace the path of that information back and find the source," Pinto adds. This method can also be used to identify the origin of a spam message or a computer virus using only a limited number of sensors within the network.

Trace the propagation of an epidemic

Out in the real world, the algorithm can be employed to find the primary source of an infectious disease, such as cholera. "We tested our method with data on an epidemic in South Africa provided by EPFL professor Andrea Rinaldo's Ecohydrology Laboratory," says Pinto. "By modeling water networks, river networks, and human transport networks, we were able to find the spot where the first cases of infection appeared by monitoring only a small fraction of the villages."

The method would also be useful in responding to terrorist attacks, such as the 1995 sarin gas attack in the Tokyo subway, in which poisonous gas released in the city's subterranean tunnels killed 13 people and injured nearly 1,000 more. "Using this algorithm, it wouldn't be necessary to equip every station with detectors. A sample would be sufficient to rapidly identify the origin of the attack, and action could be taken before it spreads too far," says Pinto.

Identifying the brains behind a terrorist attack

Computer simulations of the telephone conversations that could have occurred during the terrorist attacks on September 11, 2001, were used to test Pinto's system. "By reconstructing the message exchange inside the 9/11 terrorist network extracted from publicly released news, our system spit out the names of three potential suspects -- one of whom was found to be the mastermind of the attacks, according to the official enquiry."

The validity of this method has thus been demonstrated a posteriori. But according to Pinto, it could also be used preventatively -- for example, to understand an outbreak before it gets out of control. "By carefully selecting points in the network to test, we could more rapidly detect the spread of an epidemic," he points out. It could also be a valuable tool for advertisers who use viral marketing strategies by leveraging the Internet and social networks to reach customers. For example, this algorithm would allow them to identify the specific Internet blogs that are the most influential for their target audience and to understand how information in these articles spreads throughout the online community.


Story Source:

The above story is reprinted from materials provided by Ecole Polytechnique Fédérale de Lausanne.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Pedro C. Pinto, Patrick Thiran, and Martin Vetterli. Locating the Source of Diffusion in Large-Scale Networks. Phys. Rev. Lett., 10 August 2012 DOI: 10.1103/PhysRevLett.109.068702

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Saturday, August 18, 2012

Quantum computers could help search engines keep up with the Internet's growth

ScienceDaily (June 12, 2012) — Most people don't think twice about how Internet search engines work. You type in a word or phrase, hit enter, and poof -- a list of web pages pops up, organized by relevance.

Behind the scenes, a lot of math goes into figuring out exactly what qualifies as the most relevant web page for your search. Google, for example, uses a page ranking algorithm that is rumored to be the largest numerical calculation carried out anywhere in the world. With the web constantly expanding, researchers at USC have proposed -- and demonstrated the feasibility of -- using quantum computers to speed up that process.
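The quantum algorithm itself is outside the scope of a short example, but the classical computation it aims to accelerate, the PageRank iteration, fits in a few lines of Python (the tiny link graph and the damping factor of 0.85 are illustrative choices, not Google's production setup):

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the pages it links to; returns a score per page.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:                 # dangling page: spread its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
            else:
                for q in outgoing:
                    new_rank[q] += damping * rank[p] / len(outgoing)
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))  # most relevant first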

"This work is about trying to speed up the way we search on the web," said Daniel Lidar, corresponding author of a paper on the research that appeared in the journal Physical Review Letters on June 4.

As the Internet continues to grow, the time and resources needed to run the calculation -- which is done daily -- grow with it, Lidar said.

Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences, worked with colleagues Paolo Zanardi of USC Dornsife and first author Silvano Garnerone, formerly a postdoctoral researcher at USC and now of the University of Waterloo, to see whether quantum computing could be used to run the Google algorithm faster.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, quantum computers use quantum bits or "qubits," which can encode a one and a zero at the same time. This property, called superposition, some day will allow quantum computers to perform certain calculations much faster than traditional computers.

Currently, there isn't a quantum computer in the world anywhere near large enough to run Google's page ranking algorithm for the entire web. To simulate how a quantum computer might perform, the researchers generated models of the web that simulated a few thousand web pages.

The simulation showed that a quantum computer could, in principle, return the ranking of the most important pages in the web faster than traditional computers, and that this quantum speedup would improve the more pages needed to be ranked. Further, the researchers showed that to simply determine whether the web's page rankings should be updated, a quantum computer would be able to spit out a yes-or-no answer exponentially faster than a traditional computer.

This research was funded by a number of sources, including the National Science Foundation, the NASA Ames Research Center, the Lockheed Martin Corporation University Research Initiative program, and a Google faculty research award to Lidar.


Story Source:

The above story is reprinted from materials provided by University of Southern California.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Silvano Garnerone, Paolo Zanardi, Daniel Lidar. Adiabatic Quantum Algorithm for Search Engine Ranking. Physical Review Letters, 2012; 108 (23) DOI: 10.1103/PhysRevLett.108.230506

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Friday, August 17, 2012

Sharing data links in networks of cars

ScienceDaily (July 5, 2012) — A new algorithm lets networks of Wi-Fi-connected cars, whose layout is constantly changing, share a few expensive links to the Internet.

Wi-Fi is coming to our cars. Ford Motor Co. has been equipping cars with Wi-Fi transmitters since 2010; according to an Agence France-Presse story last year, the company expects that by 2015, 80 percent of the cars it sells in North America will have Wi-Fi built in. The same article cites a host of other manufacturers worldwide that either offer Wi-Fi in some high-end vehicles or belong to standards organizations that are trying to develop recommendations for automotive Wi-Fi.

Two Wi-Fi-equipped cars sitting at a stoplight could exchange information free of charge, but if they wanted to send that information to the Internet, they'd probably have to use a paid service such as the cell network or a satellite system. At the ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, taking place this month in Portugal, researchers from MIT, Georgetown University and the National University of Singapore (NUS) will present a new algorithm that would allow Wi-Fi-connected cars to share their Internet connections. "In this setting, we're assuming that Wi-Fi is cheap, but 3G is expensive," says Alejandro Cornejo, a graduate student in electrical engineering and computer science at MIT and lead author on the paper.

The general approach behind the algorithm is to aggregate data from hundreds of cars in just a small handful, which then upload it to the Internet. The problem, of course, is that the layout of a network of cars is constantly changing in unpredictable ways. Ideally, the aggregators would be those cars that come into contact with the largest number of other cars, but they can't be identified in advance.

Cornejo, Georgetown's Calvin Newport and NUS's Seth Gilbert -- all three of whom did or are doing their doctoral work in Nancy Lynch's group at MIT's Computer Science and Artificial Intelligence Laboratory -- began by considering the case in which every car in a fleet of cars will reliably come into contact with some fraction -- say, 1/x -- of the rest of the fleet in a fixed period of time. In the researchers' scheme, when two cars draw within range of each other, only one of them conveys data to the other; the selection of transmitter and receiver is random. "We flip a coin for it," Cornejo says.

Over time, however, "we bias the coin toss," Cornejo explains. "Cars that have already aggregated a lot will start 'winning' more and more, and you get this chain reaction. The more people you meet, the more likely it is that people will feed their data to you." The shift in probabilities is calculated relative to 1/x -- the fraction of the fleet that any one car will meet.

The smaller the value of x, the smaller the number of cars required to aggregate the data from the rest of the fleet. But for realistic assumptions about urban traffic patterns, Cornejo says, 1,000 cars could see their data aggregated by only about five.
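A toy Python simulation of that biased coin toss (random pairwise meetings, a multiplicative bias and the fleet size are illustrative assumptions, not the published contact model or analysis) shows the same concentration effect:

import random

def simulate(num_cars=1000, meetings=200000, seed=1):
    random.seed(seed)
    data = [1] * num_cars                 # every car starts holding its own data item
    for _ in range(meetings):
        a, b = random.sample(range(num_cars), 2)
        if data[a] == 0 or data[b] == 0:
            continue                      # a car with nothing left has nothing to hand over
        # Bias the coin: the car that has already aggregated more data
        # is more likely to receive the other car's data as well.
        if random.random() < data[a] / (data[a] + data[b]):
            data[a] += data[b]; data[b] = 0
        else:
            data[b] += data[a]; data[a] = 0
    holders = [d for d in data if d > 0]
    return len(holders), max(holders)

print(simulate())   # only a small handful of cars end up holding the fleet's data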

Realistically, it's not a safe assumption that every car will come in contact with a consistent fraction of the others: A given car might end up collecting some other cars' data and then disappearing into a private garage. But the researchers were able to show that, if the network of cars can be envisioned as a series of dense clusters with only sparse connections between them, the algorithm will still work well.

Weirdly, however, the researchers' mathematical analysis shows that if the network is a series of dense clusters with slightly more connections between them, aggregation is impossible. "There's this paradox of connectivity where if you have these isolated clusters, which are well-connected, then we can guarantee that there will be aggregation in the clusters," Cornejo says. "But if the clusters are well connected, but they're not isolated, then we can show that it's impossible to aggregate. It's not only our algorithm that fails; you can't do it."

"In general, the ability to have cheap computers and cheap sensors means that we can generate a huge amount of data about our environment," says John Heidemann, a research professor at the University of Southern California's Information Sciences Institute. "Unfortunately, what's not cheap is communications."

Heidemann says that the real advantage of aggregation is that it enables the removal of redundancies in data collected by different sources, so that transmitting the data requires less bandwidth. Although Heidemann's research focuses on sensor networks, he suspects that networks of vehicles could partake of those advantages as well. "If you were trying to analyze vehicle traffic, there's probably 10,000 cars on the Los Angeles Freeway that know that there's a traffic jam. You don't need every one of them to tell you that," he says.


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Wednesday, August 15, 2012

Like an orchestra without a conductor: Technology achieves synchronicity by itself

ScienceDaily (July 24, 2012) — Is it possible to sound all the church bells across the country at precisely the same time, without one central agent setting the rhythm? Indeed, it is. Future technologies, such as decentralized control mechanisms for motor vehicle traffic or robot swarms, will increasingly come to rely on the ability to function in a similarly synchronous manner. Researchers from Göttingen and Klagenfurt have now developed a new method of self-organising synchronisation and have delivered mathematical proof of the systems' guaranteed ability to achieve synchrony under their own power.

Orchestras, just like mobile telephone networks, rely on central agents that are responsible for the coordination effort. Both might experience errors: If the conductor or the mobile phone base stations are out of action for any reason, musicians and mobile phones come to a standstill. Self-organising systems offer a solution to this problem. Rather than transmitting mobile phone signals using base stations, the application allows individual mobile phones to forward signals to other mobile phones in their vicinity.

However, this collaboration of devices can only work if their signals are synchronised. Suitable models and computer simulations have existed for many years, but only now has it been mathematically proven beyond any doubt that such a system always synchronises. The essential parts of the algorithm will be published in the New Journal of Physics at the end of July. Additionally, the authors, Johannes Klinglmayr and Christian Bettstetter of the Alpen-Adria-Universität, as well as Christoph Kirst and Marc Timme from the Max Planck Institute for Dynamics and Self-Organization in Göttingen, have filed a corresponding patent application.

Using the example of church bells, Johannes Klinglmayr illustrates the method: "Imagine that none of the sacristans has a watch and that there is no central location that dictates the time. The set of rules that we have developed would nevertheless allow all the bells to chime simultaneously." Thanks to the newly developed method, an incremental adjustment of the time setting takes place at each location in reaction to the signals received. Consequently, after a number of chimes, all church bells sound in synchrony. The essential innovative aspect of this invention is that the method guarantees synchronicity regardless of the clocks' initial settings. Synchrony is achieved even if some signals are not received or not heard. According to Marc Timme, the idea can be applied in a wide variety of technologies: "Groups of robots could work together to solve problems at distributed locations, making use of this ability to synchronize each other and thus coordinate."
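A toy pulse-coupled-clock simulation in Python captures the flavour of that rule (the nudge size, the number of bells and the update scheme are illustrative assumptions; the published algorithm differs and comes with a convergence proof even when signals are lost):

import random

def simulate_bells(n=10, cycles=200, nudge=0.05, seed=0):
    # Each bell has its own clock (a phase in [0, 1)) advancing at the same
    # rate; whenever a bell chimes, every other bell nudges its clock forward
    # a little, so bells that end up chiming together absorb the rest.
    random.seed(seed)
    dt = 0.005
    phase = [random.random() for _ in range(n)]
    last_chime = [None] * n
    for step in range(int(cycles / dt)):
        for i in range(n):
            phase[i] += dt
        fired, newly = set(), {i for i in range(n) if phase[i] >= 1.0}
        while newly:                      # a chime may push others past the threshold
            fired |= newly
            for j in range(n):
                if j not in fired:
                    phase[j] *= (1.0 + nudge) ** len(newly)
            newly = {i for i in range(n) if i not in fired and phase[i] >= 1.0}
        for i in fired:
            phase[i] = 0.0
            last_chime[i] = step
    return len(set(last_chime))           # number of distinct chime times at the end

print(simulate_bells())   # typically 1: all bells end up chiming on the same beat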

The findings resulted from a collaboration between the Alpen-Adria-Universität Klagenfurt (Institute of Networked and Embedded Systems, Lakeside Labs) and the Max Planck Institute for Dynamics and Self-Organization Göttingen (Network Dynamics Group).


Story Source:

The above story is reprinted from materials provided by Alpen-Adria-Universität Klagenfurt | Graz | Wien, via AlphaGalileo.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Johannes Klinglmayr, Christoph Kirst, Christian Bettstetter, Marc Timme. Guaranteeing global synchronization in networks with stochastic interactions. New Journal of Physics, 2012; 14 (7): 073031 DOI: 10.1088/1367-2630/14/7/073031

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Researchers advance biometric security

ScienceDaily (June 21, 2012) — Researchers in the Biometric Technologies Laboratory at the University of Calgary have developed a way for security systems to combine different biometric measurements -- such as eye colour, face shape or fingerprints -- and create a learning system that simulates the brain in making decisions about information from different sources.

Professor Marina Gavrilova, the founding head of the lab -- among the first in the research community to introduce and study neural network based models for information fusion -- says they have developed a biometric security system that simulates learning patterns and cognitive processes of the brain.

"Our goal is to improve accuracy and as a result improve the recognition process," says Gavrilova. "We looked at it not just as a mathematical algorithm, but as an intelligent decision making process and the way a person will make a decision."

The algorithm can learn new biometric patterns and associate data from different data sets, allowing the system to combine information, such as fingerprint, voice, gait or facial features, instead of relying on a single set of measurements.

The key is the ability to combine features from multiple sources of information, prioritise them by identifying the more important or prevalent features to learn, and adapt the decision-making to changing conditions such as poor-quality data samples, sensor errors or the absence of one of the biometrics.

"It's a kind of artificial intelligence application that can learn new things, patterns and features," Gavrilova says. With this new multi-dimensional approach, a security system can train itself to learn the most important features of any new data and incorporate it in the decision making process.

"The neural network allows a system to combine features from different biometrics in one, learn them to make the optimal decision about the most important features, and adapt to a different environment where the set of features changes. This is a different, more flexible approach."

Biometric information is becoming more common in our daily lives, being incorporated in drivers' licenses, passports and other forms of identification. Gavrilova says the work in her lab is not only pioneering the intelligent decision-making methodology for human recognition but is also important for maintaining security in virtual worlds and avatar recognition.

The research has been published in several journals, including Visual Computer and International Journal of Information Technology and Management, as well as being presented in 2011 at the CyberWorlds and International Conference on Cognitive Informatics & Cognitive Computing in Banff.


Story Source:

The above story is reprinted from materials provided by University of Calgary.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Tuesday, August 14, 2012

Robotic assistants may adapt to humans in the factory

ScienceDaily (June 12, 2012) — In today's manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.

But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human's efficiency, particularly in airplane manufacturing.

"If the robot can provide tools and materials so the person doesn't have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person," says Shah, who leads the Interactive Robotics Group in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "It's really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory."

A robot working in isolation has to simply follow a set of preprogrammed instructions to perform a repetitive task. But working with humans is a different matter: For example, each mechanic working at the same station at an aircraft assembly plant may prefer to work differently -- and Shah says a robotic assistant would have to effortlessly adapt to an individual's particular style to be of any practical use.

Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual's preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July.

"It's an interesting machine-learning human-factors problem," Shah says. "Using this algorithm, we can significantly improve the robot's understanding of what the person's next likely actions are."

Taking wing

As a test case, Shah's team looked at spar assembly, a process of building the main structural element of an aircraft's wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.

The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human's preferences.

To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that a mechanic may make -- for example, continue to hammer a bolt after applying sealant, or apply sealant to the next hole?

"If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?" Shah says. "There are many branches."

Using the model, the group performed human experiments, training a laboratory robot to observe an individual's chain of preferences. Once the robot learned a person's preferred order of tasks, it then quickly adapted, either applying sealant or fastening a bolt according to a person's particular style of work.
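The article does not spell the algorithm out, but the core idea, learning which step a particular mechanic prefers next from observed sequences, can be sketched with simple transition counts in Python (the task names and the first-order model are illustrative assumptions, not Shah's actual method):

from collections import defaultdict

class PreferenceModel:
    # Counts which action a given worker tends to perform after each action,
    # then predicts the most likely next step so a robot can pre-stage it.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, last_action):
        followers = self.counts.get(last_action)
        if not followers:
            return None
        return max(followers, key=followers.get)

# Hypothetical spar-assembly logs: this mechanic seals every hole first,
# then hammers in all the bolts.
model = PreferenceModel()
model.observe(["seal_1", "seal_2", "seal_3", "bolt_1", "bolt_2", "bolt_3"])
model.observe(["seal_1", "seal_2", "seal_3", "bolt_1", "bolt_2", "bolt_3"])
print(model.predict_next("seal_2"))   # -> "seal_3": offer more sealant, not a bolt yet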

Working side by side

Shah says in a real-life manufacturing setting, she envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person's work habits, its factory counterpart can be programmed to recognize that same person, and initialize the appropriate task plan. Shah adds that many workers in existing plants wear radio-frequency identification (RFID) tags -- a potential way for robots to identify individuals.

Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group's adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots.

"The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface," Derby says. "I think this paper is important -- it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other."

Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze, depending on a doctor's preference. While such a scenario may be years away, robots and humans may eventually work side by side, with the right algorithms.

"We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person's just going to get frustrated and say, 'Never mind, I'll just go pick up the piece myself,'" Shah says.

This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Jennifer Chu.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



New method to find novel connections from gene to gene, drug to drug and between scientists

ScienceDaily (July 24, 2012) — Researchers from Mount Sinai School of Medicine have developed a new computational method that will make it easier for scientists to identify and prioritize genes, drug targets, and strategies for repositioning drugs that are already on the market. By mining large datasets more simply and efficiently, researchers will be able to better understand gene-gene, protein-protein, and drug/side-effect interactions. The new algorithm will also help scientists identify fellow researchers with whom they can collaborate.

Led by Avi Ma'ayan, PhD, Assistant Professor of Pharmacology and Systems Therapeutics at Mount Sinai School of Medicine, and Neil Clark, PhD, a postdoctoral fellow in the Ma'ayan laboratory, the team of investigators used the new algorithm to create 15 different types of gene-gene networks. They also discovered novel connections between drugs and side effects, and built a collaboration network that connected Mount Sinai investigators based on their past publications.

"The algorithm makes it simple to build networks from data," said Dr. Ma'ayan. "Once high dimensional and complex data is converted to networks, we can understand the data better and discover new and significant relationships, and focus on the important features of the data."

The group analyzed one million medical records of patients to build a network that connects commonly co-prescribed drugs, commonly co-occurring side effects, and the relationships between side effects and combinations of drugs. They found that reported side effects may not be caused by the drugs themselves, but by a separate condition of the patient that may be unrelated to the drugs. They also looked at 53 cancer drugs and connected them to 32 severe side effects. When chemotherapy was combined with cancer drugs that work through cell signaling, there was a strong link to cardiovascular-related adverse events. These findings can assist in the post-marketing safety surveillance of approved drugs.

The approach is presented in two separate publications in the journals BMC Bioinformatics and BMC Systems Biology. The tools that implement the approach, Genes2FANs and Sets2Networks, can be found online at http://actin.pharm.mssm.edu/genes2FANs and http://www.maayanlab.net/S2N.


Story Source:

The above story is reprinted from materials provided by The Mount Sinai Hospital / Mount Sinai School of Medicine, via Newswise.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Ruth Dannenfelser, Neil R Clark, Avi Ma'ayan. Genes2FANs: connecting genes through functional association networks. BMC Bioinformatics, 2012; 13 (1): 156 DOI: 10.1186/1471-2105-13-156

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.

