Saturday, June 30, 2012

Computer scientists show what makes movie lines memorable

ScienceDaily (May 8, 2012) — Whether it's a line from a movie, an advertising slogan or a politician's catchphrase, some statements take hold in people's minds better than others. But why?

Cornell researchers who applied computer analysis to a database of movie scripts think they may have found the secret of what makes a line memorable.

The study suggests that memorable lines use familiar sentence structure but incorporate distinctive words or phrases, and that they make general statements that could apply elsewhere. The latter may explain why lines such as "You're gonna need a bigger boat" or "These aren't the droids you're looking for" (accompanied by a hand gesture) have become standing jokes: you can lift them out of context and apply them to your own situation.

While the analysis was based on movie quotes, it could have applications in marketing, politics, entertainment and social media, the researchers said.

"Using movie scripts allowed us to study just the language, without other factors. We needed a way of asking a question just about the language, and the movies make a very nice dataset," said graduate student Cristian Danescu-Niculescu-Mizil, first author of a paper to be presented at the 50th Annual Meeting of the Association for Computational Linguistics July 8-14 in Jeju, South Korea.

The study grows out of ongoing work on how ideas travel across networks.

"We've been looking at things like who talks to whom," said Jon Kleinberg, a professor of computer science who worked on the study, "but we hadn't explored how the language in which an idea was presented might have an effect."

To address that, they collaborated with Lillian Lee, a professor of computer science who specializes in computer processing of natural human language.

They obtained scripts from about 1,000 movies, and a database of memorable quotes from those movies from the Internet Movie Database. Each quote was paired with another from the movie's script, spoken by the same character in the same scene and about the same length, to eliminate every factor except the language itself. Obi-Wan Kenobi, for example, also said, "You don't need to see his identification," but you don't hear that a lot.

They asked a group of people who had not seen the movies to choose which quote in each pair was more memorable. Two patterns emerged to identify the memorable choice: distinctiveness and generality.

Then the researchers programmed a computer with linguistic rules reflecting these concepts. A line will be less general if it contains third-person pronouns and definite articles (which refer to people, objects or events in the scene) and uses past tense (usually referring to something that happened previously in the story). Distinctive language can be identified by comparison with a database of news stories. The computer was able to choose the memorable quote an average of 64 percent of the time.
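
To make the concept concrete, here is a minimal sketch of a pairwise scorer in the spirit of those rules. The features below are crude stand-ins invented for illustration; the actual model also scores lexical distinctiveness against a news-corpus language model.

```python
# Toy pairwise memorability chooser (invented features, not the paper's model).
def generality_penalty(quote):
    tokens = quote.lower().split()
    third_person = {"he", "she", "him", "her", "his", "they", "them"}
    pronouns = sum(1 for t in tokens if t in third_person)
    definites = tokens.count("the")                           # definite articles
    past_tense = sum(1 for t in tokens if t.endswith("ed"))   # crude proxy
    return pronouns + definites + past_tense   # higher = more scene-bound

def pick_memorable(quote_a, quote_b):
    # Prefer the quote that is less tied to its original scene.
    if generality_penalty(quote_a) <= generality_penalty(quote_b):
        return quote_a
    return quote_b

print(pick_memorable("These aren't the droids you're looking for",
                     "You don't need to see his identification"))
```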

Later analysis also found subtle differences in sound and word choice: memorable quotes use more sounds made in the front of the mouth, more syllables per word and fewer coordinating conjunctions.

In a further test, the researchers found that the same rules applied to popular advertising slogans.

Although teaching a computer how to write memorable dialogue is probably a long way off, applications might be developed to monitor the work of human writers and evaluate it in progress, Kleinberg suggested.

The researchers have set up a website where you can test your skill at identifying memorable movie quotes, and perhaps contribute some data to the research, at www.cs.cornell.edu/~cristian/memorability.html.

Story Source:

The above story is reprinted from materials provided by Cornell University, via Newswise. The original article was written by Bill Steele.

Quantum computer built inside a diamond

ScienceDaily (Apr. 4, 2012) — Diamonds are forever -- or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against "decoherence" -- noise that prevents the computer from functioning properly.

The demonstration shows the viability of solid-state quantum computers, which -- unlike earlier gas- and liquid-state systems -- may represent the future of quantum computing because they can be easily scaled up in size. Current quantum computers are typically very small and -- though impressive -- cannot yet compete with the speed of larger, traditional computers.

The multinational team included USC Professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. Their findings will be published on April 5 in Nature.

The team's diamond quantum computer system featured two quantum bits (called "qubits"), made of subatomic particles.

Unlike traditional computer bits, which encode either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to "tunnel" through energy barriers, will someday allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities -- things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry, because they make the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (More accurately, the "spin" of each of these subatomic particles served as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but also fall victim more quickly to "decoherence." A qubit based on a nucleus, which is large, is much more stable but slower.

"A nucleus has a long decoherence time -- in the milliseconds. You can think of it as very sluggish," said Lidar, who holds a joint appointment with the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection -- using microwave pulses to continually switch the direction of the electron spin rotation.

"It's a little like time travel," Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.

The team was able to demonstrate that their diamond-encased system does indeed operate in a quantum fashion by seeing how closely it matched "Grover's algorithm."

The algorithm is not new -- Lov Grover of Bell Labs invented it in 1996 -- but it shows the promise of quantum computing.

The test is a search of an unsorted database, akin to being told to search for a name in a phone book when you've only been given the phone number.

Sometimes you'd miraculously find it on the first try; other times you might have to search through the entire book to find it. If you did the search countless times, on average, you'd find the name you were looking for after searching through half of the phone book.

Mathematically, this can be expressed by saying you'd find the correct choice in X/2 tries -- if X is the number of total choices you have to search through. So, with four choices total, you'll find the correct one after two tries on average.

A quantum computer, using the properties of superposition, can find the correct choice much more quickly. The mathematics behind it is complicated, but in practical terms, a quantum computer searching through an unsorted list of four choices will find the correct choice on the first try, every time.
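
A short simulation makes the four-item case concrete. The sketch below (an illustration, not the USC experiment) runs one Grover iteration on a plain statevector; for N = 4 a single iteration lands on the marked item with probability 1.

```python
import numpy as np

N = 4            # database size (two qubits)
marked = 2       # index of the "correct" item (arbitrary choice)

# Start in the uniform superposition over all four basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked item's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I.
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * (s @ s.T) - np.eye(N)

# One Grover iteration -- optimal for N = 4.
state = diffusion @ (oracle @ state)

probs = state ** 2
print(np.round(probs, 3))      # [0. 0. 1. 0.]
print(int(np.argmax(probs)))   # 2 -- the "first try", every time
```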

Though not perfect, the new computer picked the correct choice on the first try about 95 percent of the time -- enough to demonstrate that it operates in a quantum fashion.

Story Source:

The above story is reprinted from materials provided by University of Southern California, via EurekAlert!, a service of AAAS.

Journal Reference:

T. van der Sar, Z. H. Wang, M. S. Blok, H. Bernien, T. H. Taminiau, D. M. Toyli, D. A. Lidar, D. D. Awschalom, R. Hanson, V. V. Dobrovitski. Decoherence-protected quantum gates for a hybrid solid-state spin register. Nature, 2012; 484 (7392): 82 DOI: 10.1038/nature10900


Thursday, June 28, 2012

Robots will quickly recognize and respond to human gestures, with new algorithms

ScienceDaily (May 24, 2012) — New intelligent algorithms could help robots to quickly recognize and respond to human gestures. Researchers at A*STAR Institute for Infocomm Research in Singapore have created a computer program which recognizes human gestures quickly and accurately, and requires very little training.

Many works of science fiction have imagined robots that could interact directly with people to provide entertainment, services or even health care. Robotics is now at a stage where some of these ideas can be realized, but it remains difficult to make robots easy to operate.

One option is to train robots to recognize and respond to human gestures. In practice, however, this is difficult because a simple gesture such as waving a hand may appear very different between different people. Designers must develop intelligent computer algorithms that can be 'trained' to identify general patterns of motion and relate them correctly to individual commands.

Now, Rui Yan and co-workers at the A*STAR Institute for Infocomm Research in Singapore have adapted a cognitive memory model called a localist attractor network (LAN) to develop a new system that recognizes gestures quickly and accurately and requires very little training.

"Since many social robots will be operated by non-expert users, it is essential for them to be equipped with natural interfaces for interaction with humans," says Yan. "Gestures are an obvious, natural means of human communication. Our LAN gesture recognition system only requires a small amount of training data, and avoids tedious training processes."

Yan and co-workers tested their software by integrating it with ShapeTape, a special jacket that uses fibre optics and inertial sensors to monitor the bending and twisting of hands and arms. They programmed the ShapeTape to provide data 80 times per second on the three-dimensional orientation of shoulders, elbows and wrists, and applied velocity thresholds to detect when gestures were starting.
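
The velocity-threshold step might look something like the following sketch; the data layout and threshold values here are assumptions for illustration, not details from the paper.

```python
import numpy as np

RATE_HZ = 80         # ShapeTape sampling rate reported in the article
START_THRESH = 0.8   # rad/s, assumed onset threshold
STOP_THRESH = 0.2    # rad/s, assumed rest threshold

def segment_gestures(angles):
    """angles: (T, J) array of joint angles sampled at RATE_HZ.
    Returns (start, end) sample-index pairs of candidate gestures."""
    # Finite-difference joint speeds, summed over all joints.
    speed = (np.abs(np.diff(angles, axis=0)) * RATE_HZ).sum(axis=1)
    segments, start = [], None
    for t, s in enumerate(speed):
        if start is None and s > START_THRESH:
            start = t                      # motion onset: gesture may begin
        elif start is not None and s < STOP_THRESH:
            segments.append((start, t))    # motion ceased: gesture ends
            start = None
    return segments
```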

In tests, five different users wore the ShapeTape jacket and used it to control a virtual robot through simple arm motions that represented commands such as forward, backwards, faster or slower. The researchers found that 99.15% of gestures were correctly translated by their system. It is also easy to add new commands by demonstrating a new control gesture just a few times.

The next step in improving the gesture recognition system is to allow humans to control robots without the need to wear any special devices. Yan and co-workers are tackling this problem by replacing the ShapeTape jacket with motion-sensitive cameras.

"Currently we are building a new gesture recognition system by incorporating our method with a Microsoft Kinect camera," says Yan. "We will implement the proposed system on an autonomous robot to test its usability in the context of a realistic service task, such as cleaning!"

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR), via ResearchSEA.

Journal Reference:

Rui Yan, Keng Peng Tee, Yuanwei Chua, Haizhou Li, Huajin Tang. Gesture Recognition Based on Localist Attractor Networks with Application to Robot Control [Application Notes]. IEEE Computational Intelligence Magazine, 2012; 7 (1): 64 DOI: 10.1109/MCI.2011.2176767


Wednesday, June 27, 2012

'Game-powered machine learning' opens door to Google for music

ScienceDaily (May 4, 2012) — Can a computer be taught to automatically label every song on the Internet using sets of examples provided by unpaid music fans? University of California, San Diego engineers have found that the answer is yes, and the results are as accurate as using paid music experts to provide the examples, saving considerable time and money. In results published in the April 24 issue of the Proceedings of the National Academy of Sciences, the researchers report that their solution, called "game-powered machine learning," would enable music lovers to search every song on the web well beyond popular hits, with a simple text search using key words like "funky" or "spooky electronica."

Searching for specific multimedia content, including music, is a challenge because of the need to use text to search images, video and audio. The researchers, led by Gert Lanckriet, a professor of electrical engineering at the UC San Diego Jacobs School of Engineering, hope to create a text-based multimedia search engine that will make it far easier to access the explosion of multimedia content online. That's because humans working round the clock labeling songs with descriptive text could never keep up with the volume of content being uploaded to the Internet. For example, YouTube users upload 60 hours of video content per minute, according to the company.

In Lanckriet's solution, computers study the examples of music that have been provided by the music fans and labeled in categories such as "romantic," "jazz," "saxophone," or "happy." The computer then analyzes waveforms of recorded songs in these categories looking for acoustic patterns common to each. It can then automatically label millions of songs by recognizing these patterns. Training computers in this way is referred to as machine learning. "Game-powered" refers to the millions of people who are already online that Lanckriet's team is enticing to provide the sets of examples by labeling music through a Facebook-based online game called Herd It (http://apps.facebook.com/herd-it).
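
As an illustration of that training step, here is a minimal sketch that learns one binary tagger per label from fan-labeled clips. The features and model are invented stand-ins; the study's actual audio pipeline is far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TAGS = ["romantic", "jazz", "saxophone", "happy"]

def waveform_features(wave, n_bands=32):
    # Crude log spectral-band energies as stand-in acoustic features.
    spectrum = np.abs(np.fft.rfft(wave))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([b.sum() for b in bands])

def train_taggers(X, Y):
    # X: one feature row per labeled clip; Y[:, k] = 1 if the clip got tag k.
    return [LogisticRegression(max_iter=1000).fit(X, Y[:, k])
            for k in range(Y.shape[1])]

def auto_tag(models, wave, threshold=0.5):
    x = np.asarray(waveform_features(wave)).reshape(1, -1)
    return [tag for tag, m in zip(TAGS, models)
            if m.predict_proba(x)[0, 1] > threshold]
```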

"This is a very promising mechanism to address large-scale music search in the future," said Lanckriet, whose research earned him a spot on MIT Technology Review's list of the world's top young innovators in 2011.

Another significant finding in the paper is that the machine can use what it has learned to design new games that elicit the most effective training data from the humans in the loop. "The question is if you have only extracted a little bit of knowledge from people and you only have a rudimentary machine learning system, can the computer use that rudimentary version to determine the most effective next questions to ask the people?" said Lanckriet. "It's like a baby. You teach it a little bit and the baby comes back and asks more questions." For example, the machine may be great at recognizing the music patterns in rock music but struggle with jazz. In that case, it might ask for more examples of jazz music to study.
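
Continuing the sketch above, the feedback step can be as simple as asking for new labels wherever held-out performance is weakest; the selection criterion here is an assumption for illustration.

```python
import numpy as np

def next_game_tag(models, X_val, Y_val, tags):
    # Accuracy of each per-tag model on held-out validation clips.
    accs = [m.score(X_val, Y_val[:, k]) for k, m in enumerate(models)]
    # The next Herd It game would solicit examples of the weakest tag.
    return tags[int(np.argmin(accs))]   # e.g. "jazz" if jazz lags behind
```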

It's the active feedback loop that combines human knowledge about music and the scalability of automated music tagging through machine learning that makes "Google for music" a real possibility. Although human knowledge about music is essential to the process, Lanckriet's solution requires relatively little human effort to achieve great gains. Through the active feedback loop, the computer automatically creates new Herd It games to collect the specific human input it needs to most effectively improve the auto-tagging algorithms, said Lanckriet. The game goes well beyond the two primary methods of categorizing music used today: paying experts in music theory to analyze songs -- the method used by Internet radio sites like Pandora -- and collaborative filtering, which online book and music sellers now use to recommend products by comparing a buyer's past purchases with those of people who made similar choices.

Both methods are effective up to a point. But paid music experts are expensive and can't possibly keep up with the vast expanse of music available online. Pandora has just 900,000 songs in its catalog after 12 years in operation. Meanwhile, collaborative filtering only really works with books and music that are already popular and selling well.

The big picture: Personalized radio

Lanckriet foresees a time when -- thanks to this massive database of cataloged music -- cell phone sensors will track the activities and moods of individual cell phone users and use that data to provide a personalized radio service -- the kind that matches music to one's activity and mood, without repeating the same songs over and over again.

"What I would like long-term is just one single radio station that starts in the morning and it adapts to you throughout the day. By that I mean the user doesn't have to tell the system, "Hey, it's afternoon now, I prefer to listen to hip hop in the afternoon. The system knows because it has learned the cell phone user's preferences."

This kind of personalized cell phone radio can only be made possible if the cell phone has a large database of accurately labeled songs from which to choose. That's where efforts to develop a music search engine are ultimately heading. The first step is figuring out how to label all the music online well beyond the most popular hits. As Lanckriet's team demonstrated in PNAS, game-powered machine learning is making that a real possibility.

Lanckriet's research is funded by the National Science Foundation, National Institutes of Health, the Alfred P. Sloan Foundation, Google, Yahoo!, Qualcomm, IBM and eHarmony. You can watch a video about the research and Lanckriet's auto-tagging algorithms to learn more.

Story Source:

The above story is reprinted from materials provided by University of California - San Diego.

Journal Reference:

L. Barrington, D. Turnbull, G. Lanckriet. Game-powered machine learning. Proceedings of the National Academy of Sciences, 2012; 109 (17): 6411 DOI: 10.1073/pnas.1014748109


Crucial advances in 'brain reading' demonstrated

ScienceDaily (Dec. 21, 2011) — At UCLA's Laboratory of Integrative Neuroimaging Technology, researchers use functional MRI brain scans to observe brain signal changes that take place during mental activity. They then employ computerized machine learning (ML) methods to study these patterns and identify the cognitive state -- or sometimes the thought process -- of human subjects. The technique is called "brain reading" or "brain decoding."

In a new study, the UCLA research team describes several crucial advances in this field, using fMRI and machine learning methods to perform "brain reading" on smokers experiencing nicotine cravings.

The research, presented last week at the Neural Information Processing Systems' Machine Learning and Interpretation in Neuroimaging workshop in Spain, was funded by the National Institute on Drug Abuse, which is interested in using these methods to help people control drug cravings.

In this study on addiction and cravings, the team classified data taken from cigarette smokers who were scanned while watching videos meant to induce nicotine cravings. The aim was to understand in detail which regions of the brain and which neural networks are responsible for resisting nicotine addiction specifically, and cravings in general, said Dr. Ariana Anderson, a postdoctoral fellow in the Integrative Neuroimaging Technology lab and the study's lead author.

"We are interested in exploring the relationships between structure and function in the human brain, particularly as related to higher-level cognition, such as mental imagery," Anderson said. "The lab is engaged in the active exploration of modern data-analysis approaches, such as machine learning, with special attention to methods that reveal systems-level neural organization."

For the study, smokers sometimes watched videos meant to induce cravings, sometimes watched "neutral" videos and sometimes watched no video at all. They were instructed to attempt to fight nicotine cravings when they arose.

The data from fMRI scans taken of the study participants was then analyzed. Traditional machine learning methods were augmented by Markov processes, which use past history to predict future states. By measuring the brain networks active over time during the scans, the resulting machine learning algorithms were able to anticipate changes in subjects' underlying neurocognitive structure, predicting with a high degree of accuracy (90 percent for some of the models tested) what they were watching and, as far as cravings were concerned, how they were reacting to what they viewed.
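
In outline, a Markov-augmented classifier combines what the model sees in the current scan with how brain states tend to follow one another. A minimal sketch of that formulation (our illustration, not the team's actual model):

```python
import numpy as np

def fit_transitions(state_seqs, n_states):
    # Count state-to-state transitions across scan sequences
    # (add-one smoothing), then normalize rows into probabilities.
    T = np.ones((n_states, n_states))
    for seq in state_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def predict_next(T, likelihoods, prev_state):
    # Blend the per-scan classifier's likelihoods for each state with
    # the transition probabilities out of the previous state.
    post = T[prev_state] * likelihoods
    return int(np.argmax(post / post.sum()))
```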

"We detected whether people were watching and resisting cravings, indulging in them, or watching videos that were unrelated to smoking or cravings," said Anderson, who completed her Ph.D. in statistics at UCLA. "Essentially, we were predicting and detecting what kind of videos people were watching and whether they were resisting their cravings."

In essence, the algorithm was able to complete or "predict" the subjects' mental states and thought processes in much the same way that Internet search engines or texting programs on cell phones anticipate and complete a sentence or request before the user is finished typing. And this machine learning method based on Markov processes demonstrated a large improvement in accuracy over traditional approaches, the researchers said.

Machine learning methods, in general, create a "decision layer" -- essentially a boundary separating the different classes one needs to distinguish. For example, values on one side of the boundary might indicate that a subject believes various test statements and, on the other, that a subject disbelieves these statements. Researchers have found they can detect these believe-disbelieve differences with high accuracy, in effect creating a lie detector. An innovation described in the new study is a means of making these boundaries interpretable by neuroscientists, rather than an often obscure boundary created by more traditional methods, like support vector machine learning.
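
As a toy illustration of the traditional kind of decision layer described here, the following fits a linear support-vector boundary between two classes in an invented two-dimensional feature space (real fMRI features are vastly higher-dimensional):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
believe = rng.normal(loc=[1.0, 1.0], scale=0.4, size=(50, 2))
disbelieve = rng.normal(loc=[-1.0, -1.0], scale=0.4, size=(50, 2))
X = np.vstack([believe, disbelieve])
y = np.array([1] * 50 + [0] * 50)

clf = LinearSVC().fit(X, y)       # the decision layer: w @ x + b = 0
print(clf.coef_, clf.intercept_)  # orientation and offset of the boundary
print(clf.score(X, y))            # near-perfect separation on this toy data
```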

"In our study, these boundaries are designed to reflect the contributed activity of a variety of brain sub-systems or networks whose functions are identifiable -- for example, a visual network, an emotional-regulation network or a conflict-monitoring network," said study co-author Mark S. Cohen, a professor of neurology, psychiatry and biobehavioral sciences at UCLA's Staglin Center for Cognitive Neuroscience and a researcher at the California NanoSystems Institute at UCLA.

"By projecting our problem of isolating specific networks associated with cravings into the domain of neurology, the technique does more than classify brain states -- it actually helps us to better understand the way the brain resists cravings," added Cohen, who also directs UCLA's Neuroengineering Training Program.

Remarkably, by placing this problem into neurological terms, the decoding process becomes significantly more reliable and accurate, the researchers said. This is especially significant, they said, because it is unusual to use prior outcomes and states in order to inform the machine learning algorithms, and it is particularly challenging in the brain because so much is unknown about how the brain works.

Machine learning typically involves two steps: a "training" phase in which the computer builds a boundary based on a set of known outcomes -- say, a bunch of trials in which a subject indicated belief or disbelief -- and a second, "prediction" phase in which it applies that boundary to classify new data.

In future research, the neuroscientists said, they will be using these machine learning methods in a biofeedback context, showing subjects real-time brain readouts to let them know when they are experiencing cravings and how intense those cravings are, in the hopes of training them to control and suppress those cravings.

But since this clearly changes the process and cognitive state for the subject, the researchers said, they may face special challenges in trying to decode a "moving target" and in separating the "training" phase from the "prediction" phase.

Story Source:

The above story is reprinted from materials provided by University of California - Los Angeles. The original article was written by Jennifer Marcus.


Monday, June 25, 2012

Grid-based computing to fight neurological disease

ScienceDaily (Apr. 11, 2012) — Grid computing, long used by physicists and astronomers to crunch masses of data quickly and efficiently, is making the leap into the world of biomedicine. Supported by EU-funding, researchers have networked hundreds of computers to help find treatments for neurological diseases such as Alzheimer's. They are calling their system the 'Google for brain imaging.'

Through the Neugrid project, the pan-European grid computing infrastructure has opened up new channels of research into degenerative neurological disorders and other illnesses, while also holding the promise of quicker and more accurate clinical diagnoses of individual patients.

The infrastructure, set up with the support of EUR 2.8 million in funding from the European Commission, was developed over three years by researchers in seven countries. Their aim, primarily, was to give neuroscientists the ability to quickly and efficiently analyse 'Magnetic resonance imaging' (MRI) scans of the brains of patients suffering from Alzheimer's disease. But their work has also helped open the door to the use of grid computing for research into other neurological disorders, and many other areas of medicine.

'Neugrid was launched to address a very real need. Neurology departments in most hospitals do not have quick and easy access to sophisticated MRI analysis resources. They would have to send researchers to other labs every time they needed to process a scan. So we thought, why not bring the resources to the researchers rather than sending the researchers to the resources,' explains Giovanni Frisoni, a neurologist and the deputy scientific director of IRCCS Fatebenefratelli, the Italian National Centre for Alzheimer's and Mental Diseases, in Brescia.

Five years' work in two weeks

The Neugrid team, led by David Manset from MaatG in France and Richard McClatchey from the University of the West of England in Bristol, laid the foundations for the grid infrastructure, starting with five distributed nodes of 100 cores (CPUs) each, interconnected with grid middleware and accessible via the internet with an easy-to-use web browser interface. To test the infrastructure, the team used datasets of images from the Alzheimer's Disease Neuroimaging Initiative in the United States, the largest public database of MRI scans of patients with Alzheimer's disease and a lesser condition termed 'Mild cognitive impairment'.

'In Neugrid we have been able to complete the largest computational challenge ever attempted in neuroscience: we extracted 6,500 MRI scans of patients with different degrees of cognitive impairment and analysed them in two weeks,' says Dr. Frisoni, the lead researcher on the project. 'On an ordinary computer it would have taken five years!'

Though Alzheimer's disease affects about half of all people aged 85 and older, its causes and progression remain poorly understood. Worldwide more than 35 million people suffer from Alzheimer's, a figure that is projected to rise to over 115 million by 2050 as the world's population ages.

Patients with early symptoms have difficulty recalling the names of people and places, remembering recent events and solving simple maths problems. As the brain degenerates, patients in advanced stages of the disease lose mental and physical functions and require round-the-clock care.

The analysis of MRI scans conducted as part of the Neugrid project should help researchers gain important insights into some of the big questions surrounding the disease such as which areas of the brain deteriorate first, what changes occur in the brain that can be identified as biomarkers for the disease and what sort of drugs might work to slow or prevent progression.

Neugrid built on research conducted by two prior EU-funded projects: Mammogrid, which set up a grid infrastructure to analyse mammography data, and AddNeuroMed, which sought biomarkers for Alzheimer's. The team are now continuing their work in a series of follow-up projects.

An expanded grid and a new paradigm

Neugrid for You (N4U), a direct continuation of Neugrid, will build upon the grid infrastructure, integrating it with 'High performance computing' (HPC) and cloud computing resources. Using EUR 3.5 million in European Commission funding, it will also expand the user services, algorithm pipelines and datasets to establish a virtual laboratory for neuroscientists.

'In Neugrid we built the grid infrastructure, addressing technical challenges such as the interoperability of core computing resources and ensuring the scalability of the architecture. In N4U we will focus on the user-facing side of the infrastructure, particularly the services and tools available to researchers,' Dr. Frisoni says. 'We want to try to make using the infrastructure for research as simple and easy as possible,' he continues. 'The learning curve should not be much more difficult than learning to use an iPhone!'

N4U will also expand the grid infrastructure from the initial five computing clusters through connections with CPU nodes at new sites, including 2,500 CPUs recently added in Paris in collaboration with the French Alternative Energies and Atomic Energy Commission (CEA), and in partnership with 'Enabling grids for e-science Biomed VO', a biomedical virtual organisation.

Another follow-up initiative, outGRID, will federate the Neugrid infrastructure, linking it with similar grid computing resources set up in the United States by the Laboratory of Neuro Imaging at the University of California, Los Angeles, and the CBRAIN brain imaging research platform developed by McGill University in Montreal, Canada. A workshop was recently held at the International Telecommunication Union, an agency of the United Nations, to foster this effort.

Dr. Frisoni is also the scientific coordinator of the DECIDE project, which will work on developing clinical diagnostic tools for doctors built upon the Neugrid grid infrastructure. 'There are a couple of important differences between using brain imaging datasets for research and for diagnosis,' he explains. 'Researchers compare many images to many others, whereas doctors are interested in comparing images from a single patient against a wider set of data to help diagnose a disease. On top of that, datasets used by researchers are anonymous, whereas images from a single patient are not and protecting patient data becomes an issue.'

The DECIDE project will address these questions in order to use the grid infrastructure to help doctors treat patients. Though the main focus of all these new projects is on using grid computing for neuroscience, Dr. Frisoni emphasises that the same infrastructure, architecture and technology could be used to enable new research -- and new, more efficient diagnostic tools -- in other fields of medicine. 'We are helping to lay the foundations for a new paradigm in grid-enabled medical research,' he says.

Neugrid received research funding under the European Union's Seventh Framework Programme (FP7).

Story Source:

The above story is reprinted from materials provided by CORDIS Features, formerly ICT Results, via AlphaGalileo.


Sunday, June 24, 2012

Harnessing the predictive power of virtual communities

ScienceDaily (Jan. 30, 2012) — Scientists have created a new algorithm to detect virtual communities, designed to capture the structure of real-life social, biological and information networks better than current approaches. The results of this study by Lovro Šubelj and his colleague Marko Bajec from the University of Ljubljana, Slovenia have just been published in The European Physical Journal B.

Communities are defined as systems of nodes interacting through links. So-called classical communities are defined by their internal level of link density. By contrast, link-pattern communities -- better suited to describe real-world phenomena -- are characterised by internal patterns of similar connectedness between their nodes.

The authors have created a model, referred to as a propagation-based algorithm, that can extract both link-density and link-pattern communities without any prior knowledge of the number of communities, unlike previous attempts at community detection. They first validated their algorithm on several synthetic benchmark and random networks. They then tested it on ten real-life networks, including social (members of a karate club), information (peer-to-peer file sharing) and biological (protein-protein interactions of a yeast) networks, and found that it detected the real-life communities more accurately than existing state-of-the-art algorithms.
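
For readers unfamiliar with propagation methods, here is the textbook label-propagation baseline for classical link-density communities; the authors' algorithm extends this idea to link-pattern communities, which this sketch does not attempt.

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """adj: dict mapping each node to a list of its neighbours."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}    # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            # Adopt the label most common among neighbours (ties at random).
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if labels[v] != choice:
                labels[v], changed = choice, True
        if not changed:
            break                    # converged: labels are stable
    return labels

# Two triangles bridged by a single edge -- typically two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```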

They concluded that real-life networks appear to be composed of link-pattern communities that are interwoven and overlap with classical link-density communities. Further work could focus on creating a generic model of the conditions, such as a low level of clustering, under which link-pattern communities emerge rather than link-density communities. The model could also help to explain why such link-pattern communities call the existing interpretation of small-world phenomena (six degrees of separation between nodes) into question.

Applications include the prediction of future friendships in online social networks, analysis of interactions in biological systems that are hard to observe otherwise, and detection of duplicated code in software systems.

Story Source:

The above story is reprinted from materials provided by Springer Science+Business Media, via AlphaGalileo.

Journal Reference:

L. Šubelj, M. Bajec. Ubiquitousness of link-density and link-pattern communities in real-world networks. The European Physical Journal B, 2012; 85 (1) DOI: 10.1140/epjb/e2011-20448-7


Saturday, June 23, 2012

Artificial intelligence: Getting better at the age guessing game

ScienceDaily (Feb. 1, 2012) — A new "active learning" algorithm guesses the age of an individual faster and more accurately than conventional algorithms.

Scientists are developing artificial intelligence solutions for image processing, which have applications in many areas including advertising, entertainment, education and healthcare. They have, for example, developed computer algorithms for facial age classification -- the automated assignment of individuals to predefined age groups based on their facial features as seen on video captures or still images.

Improving the accuracy of facial age classification, however, is not easy. A person can teach a computer to make better guesses by running its algorithm over a large database of facial images for which the ages are known, but acquiring such a database of labeled images can be both time-consuming and expensive. The process might even breach privacy in certain countries. Jian-Gang Wang at the A*STAR Institute for Infocomm Research and co-workers have now developed an algorithm called incremental bilateral two-dimensional linear discriminant analysis (IB2DLDA) that could overcome such problems.

The researchers designed IB2DLDA so that it actively 'learns'. The algorithm first processes a small pool of labeled images, then iteratively selects the most informative samples from a large pool of unlabeled images, asks the user to label them, and adds that information to the training database. According to Wang, unlabeled images that are markedly different from the labeled samples are the most informative. The 'active learning' approach significantly improves the efficiency of the algorithm and reduces the number of samples that need to be labeled, and hence the time and effort required to train the computer.
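
The selection step follows the "furthest nearest-neighbor" criterion named in the paper cited below: query the unlabeled image whose nearest labeled image is furthest away. A minimal sketch, with feature extraction and the discriminant projection omitted:

```python
import numpy as np

def furthest_nearest_neighbor(X_labeled, X_unlabeled):
    # Pairwise distances: rows are unlabeled samples, columns labeled ones.
    d = np.linalg.norm(X_unlabeled[:, None, :] - X_labeled[None, :, :],
                       axis=-1)
    nn_dist = d.min(axis=1)         # distance to the nearest labeled sample
    return int(np.argmax(nn_dist))  # least-covered sample: ask for its label
```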

Based on their new findings, the researchers hope that it will become easier to build facial age classification into intelligent machines. The technology could find use, for example, in digital signage where the machine determines the age group of the viewer and displays targeted advertisements designed for those age groups, or in interactive games where the machine automatically presents different games based on the players' age range. Wang adds, "A vending machine that can estimate the age of a buyer could be useful for products that involve age control, such as alcoholic drinks and cigarettes."

The researchers demonstrated that the active learning approach was much faster than random selection, and used only half the number of samples. The method is also suitable for handling problems with a large number of classes, and could one day be generalized to applications other than age estimation. "We are now planning to extend our method to other areas such as classifying human emotions and actions," says Wang.

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research.

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR), via ResearchSEA.

Journal Reference:

Jian-Gang Wang, E Sung, Wei-Yun Yau. Active Learning for Solving the Incomplete Data Problem in Facial Age Classification by the Furthest Nearest-Neighbor Criterion. IEEE Transactions on Image Processing, 2011; 20 (7): 2049 DOI: 10.1109/TIP.2011.2106794


Friday, June 22, 2012

To combat identity theft, protect computer, experts say

ScienceDaily (Mar. 19, 2012) — Having a triple-threat combination of protective software on your computer greatly reduces your chances of identity theft, according to a study led by a Michigan State University criminologist.

In a survey of more than 600 people, the researchers found that computer users who were running antivirus, anti-adware and anti-spyware software were 50 percent less likely to have their credit card information stolen.

The study appears in the research journal Deviant Behavior.

"When you think about antivirus software protecting you, you might think about it keeping your files safe and not losing your music and photos," said Thomas Holt, MSU associate professor of criminal justice and lead researcher on the project. "The important thing we're finding here is that it's not just about protecting your files, but also about protecting you economically -- about reducing your chances of being a victim of identity theft."

Holt's co-investigator was Michael Turner, associate professor at the University of North Carolina-Charlotte.

According to the study, about 15 percent of respondents said they had experienced computer-related identity theft in the past year. Males were more likely to be victims, Holt said.

"We're not sure what this might be a consequence of," he said. "Is it that males are less careful about what they do online? Is it a difference in how they shop online or conduct online commerce?"

Those who engaged in "computer-related deviance" -- such as downloading pirated music or pornographic images -- were more likely to be victims of identity theft, the study found. Pirated movies and music are a particular risk because they may contain malware.

But the most practical news for computer users was the combined protective factor of the antivirus, anti-spyware and anti-adware software, each of which has a different function for keeping a computer safe, Holt said.

Antivirus software detects and removes malicious software programs such as viruses and worms that can corrupt a computer, delete data and spread to other computers. Anti-spyware and anti-adware programs, meanwhile, are designed to protect against software that either self-installs without the user's knowledge or is installed by the user and enables information to be gathered covertly about a person's Internet use, passwords and so on.

"You have a much better chance of not getting your credit card number stolen if you have all three forms of protective software," Holt said.

Story Source:

The above story is reprinted from materials provided by Michigan State University.

Journal Reference:

Thomas J. Holt, Michael G. Turner. Examining Risks and Protective Factors of On-Line Identity Theft. Deviant Behavior, 2012; 33 (4): 308 DOI: 10.1080/01639625.2011.584050


Thursday, June 21, 2012

Efficiency of multi-hop wireless networks boosted

ScienceDaily (Apr. 19, 2012) — Multi-hop wireless networks can provide data access for large and unconventional spaces, but they have long faced significant limits on the amount of data they can transmit. Now researchers from North Carolina State University have developed a more efficient data transmission approach that can boost the amount of data the networks can transmit by 20 to 80 percent.

"Our approach increases the average amount of data that can be transmitted within the network by at least 20 percent for networks with randomly placed nodes -- and up to 80 percent if the nodes are positioned in clusters within the network," says Dr. Rudra Dutta, an associate professor of computer science at NC State and co-author of a paper on the research. The approach also makes the network more energy efficient, which can extend the lifetime of the network if the nodes are battery-powered.

Multi-hop wireless networks utilize multiple wireless nodes to provide coverage to a large area by forwarding and receiving data wirelessly between the nodes. However, these networks have "hot spots" -- places in the network where multiple wireless transmissions can interfere with each other. This limits how quickly the network can transfer data, because the nodes have to take turns transmitting data at these congested points.

Data can be transmitted at low power over short distances, which limits the degree of interference with other nodes. But this approach means that the data may have to be transmitted through many nodes before reaching its final destination. Or, data can be transmitted at high power, which means the data can be sent further and more quickly -- but the powerful transmission may interfere with transmissions from many other nodes.

Dutta and Ph.D. student Parth Pathak developed an approach called centrality-based power control to address the problem. Their approach uses an algorithm that instructs each node in the network on how much power to use for each transmission depending on its final destination.

The algorithm optimizes system efficiency by determining when a powerful transmission is worth the added signal disruption, and when less powerful transmissions are needed.
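
As a rough illustration of the idea (an assumed per-node policy for simplicity; the paper's algorithm sets power per transmission and destination), nodes with high betweenness centrality sit in likely hot spots and are told to transmit quietly:

```python
import networkx as nx

def assign_power(graph, low=1.0, high=4.0, quantile=0.8):
    """Assumed policy: hot-spot nodes (high betweenness centrality) use
    low power to cut interference; peripheral nodes may use high power
    to cover more distance in fewer hops."""
    cent = nx.betweenness_centrality(graph)
    cutoff = sorted(cent.values())[int(quantile * (len(cent) - 1))]
    return {n: (low if c >= cutoff else high) for n, c in cent.items()}

g = nx.random_geometric_graph(30, radius=0.3, seed=1)
print(assign_power(g))
```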

The paper, "Centrality-based power control for hot-spot mitigation in multi-hop wireless networks," is published online by the journal Computer Communications, and is in press for a print version of an upcoming issue of the journal. Pathak is lead author. The research was supported in part by the U.S. Army Research Office.

Story Source:

The above story is reprinted from materials provided by North Carolina State University.

Journal Reference:

Parth H. Pathak, Rudra Dutta. Centrality-based power control for hot-spot mitigation in multi-hop wireless networks. Computer Communications, 2012; DOI: 10.1016/j.comcom.2012.01.023


Wednesday, June 20, 2012

Robot reconnoiters uncharted terrain

ScienceDaily (Feb. 16, 2012) — Mobile robots have many uses. They serve as cleaners, carry out inspections and search for survivors of disasters. But often, there is no map to guide them through unknown territory. Researchers have now developed a mobile robot that can roam uncharted terrain and simultaneously map it -- all thanks to an algorithm toolbox.

Industrial robots have been a familiar sight in the workplace for many years. In automotive and household appliance manufacture, for example, they have proved highly reliable on production and assembly lines. But now a new generation of high-tech helpers is at hand: Mobile robots are being used in place of humans to explore hazardous and difficult-to-access environments such as buildings in danger of collapsing, caves, or ground that has been polluted by an industrial accident. Equipped with sensors and optical cameras, these robots can help rescue services search for victims in the wake of natural disasters, explosions or fires, and can measure concentrations of hazardous substances.

There's just one problem: Often there is no map to show them the location of obstacles and steer them along navigable routes. Yet such maps are critical to ensuring that the high-tech machines are able to make progress, either independently or guided by remote control. Researchers at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB in Karlsruhe have now developed a roaming land robot that autonomously reconnoiters and maps uncharted terrain. The robot uses special algorithms and multi-sensor data to carve a path through unknown territory.

"To be able to navigate independently, our mobile robot has to fulfill a number of requirements. It must be able to localize itself within its immediate surroundings, continuously recalculate its position as it makes its way through the danger area, and simultaneously refine the map it is generating," says graduate engineer Christian Frey of the IOSB. To make this possible, he and his team have developed an algorithm toolbox for the robot that runs on a built-in computer. The robot is additionally equipped with a variety of sensors. Odometry sensors measure wheel revolutions, inertial sensors compute accelerations, and distance-measuring sensors register clearance from walls, steps, trees and bushes, to name but a few potential obstacles. Cameras and laser scanners record the environment and assist in the mapping process. The algorithms read the various data supplied by the sensors and use them to determine the robot's precise location. The interplay of all these different elements concurrently produces a map, which is updated continuously. Experts call the process Simultaneous Localization and Mapping, or SLAM.

Mobile robots face an additional challenge: to find the optimal path that will enable them to complete each individual task. Depending on the situation, this may be the shortest and quickest route, or perhaps the most energy-efficient, i.e. the one that uses the least amount of gasoline. When planning a course, the high-tech helpers must take into account restrictions on mobility such as a limited turning circle, and must navigate around obstacles. And should the environment change, for example as a result of falling objects or earthquake aftershocks, a robot must register this and use its toolbox to recalculate its route.

"We made our toolbox modular, so it's not difficult to adapt the algorithms to suit different types of mobile robot or specific in- or outdoor application scenarios. For example, it doesn't matter what sensor set-up is used, or whether the robot has two- or four-wheel drive," says Frey. The software can be customized to meet the needs of individual users, with development work taking just a few months. Frey adds: "The toolbox is suitable for all sorts of situations, not only accident response scenarios. It can be installed in cleaning robots or lawnmowers, for example, and a further possible application would be in roaming robots used to patrol buildings or inspect gas pipelines for weak points." From March 6-10, the IOSB researchers will be demonstrating their mobile robot technology at the CeBIT trade fair.

Story Source:

The above story is reprinted from materials provided by Fraunhofer-Gesellschaft.


Website security: Spot a bot to stop a botnet

ScienceDaily (May 1, 2012) — Computer scientists in India have developed a two-pronged algorithm that can detect the presence of a botnet on a computer network and block its malicious activities before it causes too much harm. The team describes details of the system in a forthcoming issue of the International Journal of Wireless and Mobile Computing.

One of the most significant threats faced by computer networks is from "bots." A bot is simply a program that runs on a computer without the owner's knowledge and carries out any of a number of tasks over the network and the wider internet. It can run the same tasks, such as sending emails or accessing a specific page on the internet, at a much higher rate than would be possible if a person were to carry out the task. A collection of bots in a network, used for malicious purposes, is a botnet and while they are often organized and run by a so-called botmaster there are bots that are available for hire for malicious and criminal activity.

Bots might be illicitly installed on computers in the home, schools, businesses, government buildings and other installations. They are usually carried onto a particular computer through a malicious link on the internet, in an email, or when a contaminated external storage device, such as a USB drive, is attached to a computer that has no malware protection software installed.

Botnets are known to have been used to send mass spam emails numbering in the hundreds of millions, if not billions, of deliveries. They have also been used in corporate spying, international surveillance and for carrying out Distributed Denial of Service (DDoS) attacks, which can take whole computer networks out of commission by accessing their servers repeatedly and so blocking legitimate users.

Manoj Thakur of the Veermata Jijabai Technological Institute (VJTI), in Mumbai, India, and colleagues have developed a novel approach to detecting and combating bots. Their technique uses a two-pronged strategy involving a standalone and a network algorithm. The standalone algorithm runs independently on each node of the network and monitors active processes on the node. If it detects suspicious activity, it triggers the network algorithm. The network algorithm then analyzes the information being transferred to and from the hosts on the network to deduce whether or not the activity is due to a bot or a legitimate program on the system.

The standalone algorithm is heuristic in nature, the team says, which means it can spot previously unseen bot activity, whereas the network algorithm relies on network traffic analysis to carry out its detection. The two techniques working together can thus spot activity from known and unknown bots. This approach also has the advantage of reducing the number of false positives.
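
In outline, the two-pronged strategy could look like the sketch below; the heuristics and thresholds are our assumptions for illustration, not the paper's actual rules.

```python
import re

SUSPICIOUS_NAMES = [r"keylog", r"inject", r"ircbot", r"flood"]  # toy heuristics

def standalone_check(process_name):
    """Runs independently on each node: flag bot-like processes."""
    return any(re.search(p, process_name.lower()) for p in SUSPICIOUS_NAMES)

def network_check(flows, host, conn_rate_thresh=100):
    """Triggered on suspicion: inspect the host's traffic for bot-like
    behaviour, e.g. an abnormal outbound connection rate."""
    outbound = [f for f in flows if f["src"] == host]
    return len(outbound) > conn_rate_thresh

def is_bot(process_name, host, flows):
    # Both prongs must agree, which reduces false positives.
    return standalone_check(process_name) and network_check(flows, host)
```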

Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.

Journal Reference:

Manoj Thakur et al. Detection and prevention of botnets and malware in an enterprise network. International Journal of Wireless and Mobile Computing, 2012; 5: 144-153.


Tuesday, June 19, 2012

Looking good on Facebook

ScienceDaily (Apr. 23, 2012) — A European study of students using online social networking shows that users tend to make new connections via their own more attractive friends regardless of whether they are male or female.

Writing in the International Journal of Web Based Communities, Christina Jaschinski and Piet Kommers of the University of Twente, The Netherlands, explain how they have carried out a preliminary study to try and understand better how relationships develop online. "Social network sites have become essential for managing relationships in today's life," they explain. "Therefore, it is increasingly important for scientists to understand how impressions are formed and connections develop in the virtual world."

The advent of the so-called "Web 2.0," the interactive and sharing version of the world wide web, which includes blogs, video and photo sharing sites, social bookmarking, social media, microblogging sites such as Twitter, and online social networks including Google+ and Facebook, has enabled people all over the world to share, connect and collaborate with very little technological friction. This activity has grown considerably, especially since the marketing of relatively inexpensive smartphones and the growth of broadband internet connectivity.

People meet, connect and interact in these online communities by using a profile as a representation of their identity, the team explains. Unfortunately, the formation of impressions about a person online almost entirely lacks non-verbal cues other than those based on their profile photo and the status and content of their friends' profiles. Users can control their photo, though not how it is perceived, and, more importantly, they cannot control the profiles of other users. This latter point is not so different from the social networks we form offline, where a newcomer may judge a person based on the company they keep. At least offline the person has the opportunity to project themselves in a more comprehensive manner and deflect such judgements through their own actions and behaviour rather than just those of their friends and associates.

The team recruited 78 students who use Facebook, perhaps the most popular online social network and apparently fast approaching 1 billion users, to investigate how the attractiveness of a user's Facebook friends affects the impressions formed of that user by others. The study simply involved mocking up Facebook profiles and asking the students to carry out a "hot or not" type assessment based purely on the visual appearance of the user's profile photo within the page. The team found that someone is considered more likeable and seen as a potential friend when they are associated with good-looking friends.

The findings could have implications not only for social scientists hoping to understand these new modes of interpersonal behaviour but also for companies and other organisations hoping to benefit from social networking applications. Additionally, the extension of traditional networking to online marketing and job hunting is increasingly important, and users are becoming more aware of the impression, good or bad, that they create online.

Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.

Journal Reference:

Christina Jaschinski and Piet Kommers. Does beauty matter? The role of friends' attractiveness and gender on social attractiveness ratings of individuals on Facebook. International Journal of Web Based Communities, 2012 (in press)


Monday, June 18, 2012

Scientists develop biological computer to encrypt and decipher images

ScienceDaily (Feb. 7, 2012) — Scientists at The Scripps Research Institute in California and the Technion-Israel Institute of Technology have developed a "biological computer" made entirely from biomolecules that is capable of deciphering images encrypted on DNA chips. Although DNA has been used for encryption in the past, this is the first experimental demonstration of a molecular cryptosystem of images based on DNA computing.

The study was published in a recent online-before-print edition of the journal Angewandte Chemie.

Instead of using traditional computer hardware, a group led by Professor Ehud Keinan of Scripps Research and the Technion created a computing system using biomolecules. When suitable software was applied to the biological computer, it could decrypt, separately, fluorescent images of The Scripps Research Institute and Technion logos.

A Union Between Biology and Computer Science

In explaining the work's union of the often-disparate fields of biology and computer science, Keinan notes that a computer is, by definition, a machine made of four components -- hardware, software, input, and output. Traditional computers have always been electronic, machines in which both input and output are electronic signals. The hardware is a complex composition of metallic and plastic components, wires, and transistors, and the software is a sequence of instructions given to the machine in the form of electronic signals.

"In contrast to electronic computers, there are computing machines in which all four components are nothing but molecules," Keinan said. "For example, all biological systems and even entire living organisms are such computers. Every one of us is a biomolecular computer, a machine in which all four components are molecules that 'talk' to one another logically."

The hardware and software in these devices, Keinan notes, are complex biological molecules that activate one another to carry out some predetermined chemical work. The input is a molecule that undergoes specific, predetermined changes, following a specific set of rules (software), and the output of this chemical computation process is another well-defined molecule.

"Building" a Biological Computer

When asked what a biological computer looks like, Keinan laughs.

"Well," he said, "it's not exactly photogenic." This computer is "built" by combining chemical components into a solution in a tube. Various small DNA molecules are mixed in solution with selected DNA enzymes and ATP. The latter is used as the energy source of the device.

"It's a clear solution -- you don't really see anything," Keinan said. "The molecules start interacting upon one another, and we step back and watch what happens." And by tinkering with the type of DNA and enzymes in the mix, scientists can fine-tune the process to a desired result.

"Our biological computing device is based on the 75-year-old design by the English mathematician, cryptanalyst, and computer scientist Alan Turing," Keinan said. "He was highly influential in the development of computer science, providing a formalization of the concepts of algorithm and computation, and he played a significant role in the creation of the modern computer. Turing showed convincingly that using this model you can do all the calculations in the world. The input of the Turing machine is a long tape containing a series of symbols and letters, which is reminiscent of a DNA string. A reading head runs from one letter to another, and on each station it does four actions: 1) reading the letter; 2) replacing that letter with another letter; 3) changing its internal state; and 4) moving to next position. A table of instructions, known as the transitional rules, or software, dictates these actions. Our device is based on the model of a finite state automaton, which is a simplified version of the Turing machine. "

Unique Biological Properties

Now that he has shown the viability of a biological computer, does Keinan hope that this model will compete with its electronic counterpart?

"The ever-increasing interest in biomolecular computing devices has not arisen from the hope that such machines could ever compete with electronic computers, which offer greater speed, fidelity, and power in traditional computing tasks," Keinan said. "The main advantages of biomolecular computing devices over electronic computers have to do with other properties."

As shown in this work, he continues, a wealth of information can be stored and encrypted in DNA molecules. Although each computing step is slower than the flow of electrons in an electronic computer, the fact that trillions of such chemical steps are done in parallel makes the entire computing process fast. "Considering the fact that current microarray technology allows for printing millions of pixels on a single chip, the number of possible images that can be encrypted on such chips is astronomically large," he said.

"Also, as shown in our previous work and other projects carried out in our lab, these devices can interact directly with biological systems and even with living organisms," Keinan explained. "No interface is required since all components of molecular computers, including hardware, software, input, and output, are molecules that interact in solution along a cascade of programmable chemical events." He adds that because of DNA's ability to store information, major computer companies have been extremely interested in the development of DNA-based computing systems.

The first author of the study, "A Molecular Cryptosystem for Images by DNA Computing," is graduate student Sivan Shoshani of Technion. In addition to Keinan and Shoshani, authors include postdoctoral fellow Ron Piran of Scripps Research and Yoav Arava of the Technion.

This work was supported by the National Science Foundation, the Israel-US Binational Science Foundation, and the Skaggs Institute for Chemical Biology, as well as graduate fellowships from the Irwin and Joan Jacobs Foundation, the Fine Foundation, the Russell Berrie Nanotechnology Institute, and the Israel Ministry of Science and Technology.

Story Source:

The above story is reprinted from materials provided by Scripps Research Institute.

Journal Reference:

Sivan Shoshani, Ron Piran, Yoav Arava, Ehud Keinan. A Molecular Cryptosystem for Images by DNA Computing. Angewandte Chemie International Edition, 2012; DOI: 10.1002/anie.201107156


Sunday, June 17, 2012

Identical DNA codes discovered in different plant species

ScienceDaily (Apr. 9, 2012) — Analyzing massive amounts of data officially became a national priority recently when the White House Office of Science and Technology Policy announced the Big Data Research and Development Initiative. A multi-disciplinary team of University of Missouri researchers rose to the big data challenge when they solved a major biological question by using a groundbreaking computer algorithm to find identical DNA sequences in different plant and animal species.

"Our algorithm found identical sequences of DNA located at completely different places on multiple plant genomes," said Dmitry Korkin, lead author and assistant professor of computer science. "No one has ever been able to do that before on such a scale."

"Our discovery helps solve some of the mysteries of plant evolution," said Gavin Conant, co-author and assistant professor of animal sciences. "Basic research on the plant genome provides raw materials and improves techniques for creating medicines and crops."

Previous studies found long strings of identical code in the DNA of different animal species. But before this new MU research, which was published in the Proceedings of the National Academy of Sciences, computer programs had never been powerful enough to find identical sequences in plant DNA, because the identical sections do not sit at the same positions in each genome.

The genomes of six animals (dog, chicken, human, mouse, macaque and rat) were compared to each other. Likewise, six plant species (Arabidopsis, soybean, rice, cottonwood, sorghum and grape) were compared to each other. Comparing all the genetic sequences took 4 weeks with 48 computer processors doing 1 million searches per hour for a grand total of approximately 32 billion searches.
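
The team's algorithm is not spelled out here, but the core difficulty -- matching identical stretches that sit at different coordinates in each genome -- can be illustrated with a toy hash-based search in Python. The function name, the k-mer length and the example sequences are all our own assumptions, not the MU program:

from collections import defaultdict

def shared_kmers(genome_a: str, genome_b: str, k: int = 6):
    """Return exact length-k substrings common to both genomes,
    together with their (different) positions in each."""
    index = defaultdict(list)                 # k-mer -> positions in genome_a
    for i in range(len(genome_a) - k + 1):
        index[genome_a[i:i + k]].append(i)
    hits = []
    for j in range(len(genome_b) - k + 1):
        kmer = genome_b[j:j + k]
        for i in index.get(kmer, ()):
            hits.append((kmer, i, j))         # same sequence, different places
    return hits

a = "TTACGGATTACCGA"
b = "CCGATTACGGATCC"
print(shared_kmers(a, b))   # e.g. ('TTACGG', 0, 4) -- found at different spots

Scaled to whole genomes, an index-based strategy along these lines is what makes position-independent matching feasible at all, which is presumably why the search still required weeks of multiprocessor time.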

Although the scientists found identical sequences between plant species, just as they did between animals, they suggested the sequences evolved differently.

"You would expect to see convergent evolution, but we don't," Conant said. "Plants and animals are both complex multi-cellular organisms that have to deal with many of the same environmental conditions, like taking in air and water and dealing with weather variations, but their genomes code for solutions to these challenges in different ways."

The MU team's research laid the groundwork for future studies into the reasons plants and animals developed different genetic mechanisms and how they function. Their basic research created a foundation for discoveries that may improve human life. Besides advancing genetic science's potential to fight disease, the code-analyzing computer program itself could help in the development of new medicines.

"The same algorithm can be used to find identical sequential patterns in an organism's entire set of proteins," said Korkin. "That could potentially lead to finding new targets for existing drugs or studying these drugs' side effects."

Story Source:

The above story is reprinted from materials provided by University of Missouri-Columbia.


Saturday, June 16, 2012

National study ranks city governments' use of social media

ScienceDaily (Mar. 22, 2012) — More than six times as many big city governments reached citizens via Facebook in 2011 compared to 2009, while use of YouTube and Twitter grew fourfold and threefold respectively, a new study indicates.

Karen Mossberger, head of the University of Illinois at Chicago's public administration graduate program, and Yonghong Wu, associate professor, analyzed and ranked the online interactivity, transparency and accessibility of the country's 75 largest cities from March through May 2011. They used the data to compile the Civic Engagement Index, and compared it with their findings from a study they conducted in 2009.

The cities' rankings reflected opportunities for citizen participation and information, including:

-- hosting of open data portals
-- comments allowed on blogs and social networks
-- the extent to which online discussions concerned policy as well as city services
-- information on officials, budgets, city council meetings and neighborhood issues

New York and Seattle tied for first place, followed by Virginia Beach, Va.; Portland, Ore.; San Francisco; and Kansas City, Mo., the study reported.

Mossberger said the top-ranked city governments have made technology a priority, especially for transparency or civic engagement.

"Seattle has long been an innovator in this area, with programs to address the digital divide online and offline. New York has long used the web for transparency," she said.

Chicago tied with San Diego and Minneapolis at 17th. Toledo ranked last.

The complete rankings may be seen at http://www.uic.edu/cuppa/ipce/research.shtml.

Twitter was used by 87 percent of the cities, compared with 25 percent in 2009. Facebook also was used by 87 percent of the cities, up from 13 percent. YouTube links appeared on the websites of 75 percent of the cities, up from 16 percent.

Nearly all city sites allowed comments on Twitter, Facebook and YouTube and presented policy content such as discussions of city budgets.

"In Chicago, for example, the Emanuel administration solicited budget ideas last summer on Twitter," Mossberger said. "Louisville Mayor Greg Fischer regularly holds a virtual 'Talk to Greg' on Facebook and Twitter. Seattle is experimenting with platforms like the IdeaScale, where users can submit and rate ideas."

Open data portals were found in only 12 cities: Baltimore, Boston, Chicago, Honolulu, Louisville, Ky., Milwaukee, New York, Philadelphia, Portland, Ore., San Francisco, Seattle, and Washington, D.C. The portals allow users access to city data on crime, budgets, Freedom of Information Act requests, city facilities, vacant land, building permits and other matters.

The researchers note that some information is not formatted for easy use by average citizens.

"For example, cities often post files that require special software, such as geographic information system software. And budget data can be difficult for citizens to understand," Mossberger said.

Mossberger predicts that new apps may make information on data portals more usable. Apps designed in competitions in Chicago, New York, and Washington, D.C. have focused on civic engagement as well as city services.

"First-place winners in New York and Chicago addressed traffic and parking," Mossberger said. "But in Chicago, another winning app allowed residents to contribute ideas for a park. Other apps have been designed to track lobbyists in Chicago."

In another study confined to Illinois' 20 largest cities, the researchers found that 55 percent of the cities used Twitter, Facebook and YouTube in 2011, compared to 15 percent for Twitter, 10 percent for Facebook, and 10 percent for YouTube in 2009.

"Ultimately, the impact of these tools depends on factors other than technology -- the quality of the information, local government practices and citizen response," Mossberger said.

Story Source:

The above story is reprinted from materials provided by University of Illinois at Chicago, via Newswise.


Friday, June 15, 2012

Computer scientist leads the way to the next revolution in artificial intelligence

ScienceDaily (Apr. 2, 2012) — As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who in the 1930s set out the basis for digital computing, anticipating the electronic age, they still quest after a machine as adaptable and intelligent as the human brain.

Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing's work to its next logical step. She is translating her 1993 discovery of what she has dubbed "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

"This model is inspired by the brain," she says. "It is a mathematical formulation of the brain's neural networks with their adaptive abilities." The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine's lifetime, the Super Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

"Each time a Super-Turing machine gets input it literally becomes a different machine," Siegelmann says. "You don't want this for your PC. They are fine and fast calculators and we need them to do that. But if you want a robot to accompany a blind person to the grocery store, you'd like one that can navigate in a dynamic environment. If you want a machine to interact successfully with a human partner, you'd like one that can adapt to idiosyncratic speech, recognize facial patterns and allow interactions between partners to evolve just like we do. That's what this model can offer."

Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they've been told what to expect and how to respond, Siegelmann says. But they can't take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks.

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called "adaptive inference." In 1993, Siegelmann, then at Rutgers, showed independently in her doctoral thesis that a very different kind of computation, vastly different from the "calculating computer" model and more like Turing's prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly after.

"I was young enough to be curious, wanting to understand why the Turing model looked really strong," she recalls. "I tried to prove the conjecture that neural networks are very weak and instead found that some of the early work was faulty. I was surprised to find out via mathematical analysis that the neural models had some capabilities that surpass the Turing model. So I re-read Turing and found that he believed there would be an adaptive model that was stronger based on continuous calculations."

Each step in Siegelmann's model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is represented by the notation aleph-zero, ℵ0, which also represents the number of different infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann's most recent analysis demonstrates that Super-Turing computation has 2^ℵ0 possible behaviors. "If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe," she explains.
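
Restated in standard set-theoretic notation (our gloss, not the paper's):

\[
|\mathcal{B}_{\mathrm{Turing}}| \;=\; \aleph_0,
\qquad
|\mathcal{B}_{\mathrm{Super\text{-}Turing}}| \;=\; 2^{\aleph_0},
\]

where \(\mathcal{B}\) denotes the set of possible behaviors. The finite analogy in the quote checks out: \(2^{300} \approx 2 \times 10^{90}\), comfortably more than the roughly \(10^{80}\) atoms in the observable universe.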

The new Super-Turing machine will not only be flexible and adaptable but economical. This means that when presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence, Siegelmann says.

"If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," she adds.

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.

Story Source:

The above story is reprinted from materials provided by University of Massachusetts at Amherst.

Journal Reference:

Jérémie Cabessa, Hava T. Siegelmann. The Computational Power of Interactive Recurrent Neural Networks. Neural Computation, 2012; 24 (4): 996 DOI: 10.1162/NECO_a_00263


Android vulnerability debugged

ScienceDaily (Apr. 12, 2012) — A group of Italian researchers have discovered and neutralized a serious vulnerability present in all versions of Android, the popular operating system developed by Google specifically for smartphones and tablet computers. The vulnerability could have been easily exploited by malicious software applications, rendering Android-based devices currently on the market completely unusable. The solution proved to be effective and will be included in a future update.

The work was conducted by researchers working in various Italian universities and research centers: Prof. Alessandro Armando, Head of the "Security & Trust" Research Unit at the Bruno Kessler Foundation in Trento and coordinator of the DIST's Artificial Intelligence Laboratory at the University of Genoa; Prof. Alessio Merlo (Telematic University E-Campus); Prof. Mauro Migliardi (coordinator of Green Energy Aware Security at the University of Padua); and Luca Verderame (recent graduate in Computer Engineering at the University of Genoa).

The team of researchers promptly reported the vulnerability to Google and to the Android "security team," providing a detailed analysis of related risks. It also designed a solution that was verified by the security team of Android, and that -- given its effectiveness -- will be adopted in a future operating system update.

If it had not been neutralized, the vulnerability discovered by the Italian team would have allowed malicious application software (malware) to saturate the physical resources of the device, leading to complete blockage of both Android-based smartphones and tablet computers. The problem is especially insidious because such an application requires no authorization during installation and would appear harmless to the user.

This research will be published in the proceedings of the "27th IFIP International Information Security and Privacy Conference -- SEC 2012" (Heraklion, Crete, Greece, June 4-6, 2012).

Technical Information

The identified vulnerability stems from a defect in the control of communication between applications and vital components of Android, which makes it possible to systematically exhaust the memory resources of the device by generating an arbitrarily large number of processes. The fundamental principle of Android security is the total separation of applications (sandboxing), which ensures that no application can affect the operation of the others in any way. The team of Italian researchers showed that this separation is violated in current systems and indicated how it can be restored.

Story Source:

The above story is reprinted from materials provided by Fondazione Bruno Kessler, via AlphaGalileo.


Thursday, June 14, 2012

Self-sculpting sand: Heaps of 'smart sand' could assume any shape, form new tools or duplicate broken parts

ScienceDaily (Apr. 2, 2012) — Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: The sand has assembled itself into a large-scale replica of the model.

That may sound like a scene from a Harry Potter novel, but it's the vision animating a research project at the Distributed Robotics Laboratory (DRL) at MIT's Computer Science and Artificial Intelligence Laboratory. At the IEEE International Conference on Robotics and Automation in May, DRL researchers will present a paper describing algorithms that could enable such "smart sand." They also describe experiments in which they tested the algorithms on somewhat larger particles -- cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides.

Unlike many other approaches to reconfigurable robots, smart sand uses a subtractive method, akin to stone carving, rather than an additive method, akin to snapping LEGO blocks together. A heap of smart sand would be analogous to the rough block of stone that a sculptor begins with. The individual grains would pass messages back and forth and selectively attach to each other to form a three-dimensional object; the grains not necessary to build that object would simply fall away. When the object had served its purpose, it would be returned to the heap. Its constituent grains would detach from each other, becoming free to participate in the formation of a new shape.

Distributed intelligence

Algorithmically, the main challenge in developing smart sand is that the individual grains would have very few computational resources. "How do you develop efficient algorithms that do not waste any information at the level of communication and at the level of storage?" asks Daniela Rus, a professor of computer science and engineering at MIT and a co-author on the new paper, together with her student Kyle Gilpin. If every grain could simply store a digital map of the object to be assembled, "then I can come up with an algorithm in a very easy way," Rus says. "But we would like to solve the problem without that requirement, because that requirement is simply unrealistic when you're talking about modules at this scale." Furthermore, Rus says, from one run to the next, the grains in the heap will be jumbled together in a completely different way. "We'd like to not have to know ahead of time what our block looks like," Rus says.

Conveying shape information to the heap with a simple physical model -- such as the tiny footstool -- helps address both of these problems. To get a sense of how the researchers' algorithm works, it's probably easiest to consider the two-dimensional case. Picture each grain of sand as a square in a two-dimensional grid. Now imagine that some of the squares -- say, in the shape of a footstool -- are missing. That's where the physical model is embedded.

According to Gilpin, co-author on the new paper, the grains first pass messages to each other to determine which have missing neighbors. (In the grid model, each square could have eight neighbors.) Grains with missing neighbors are in one of two places: the perimeter of the heap or the perimeter of the embedded shape.

Once the grains surrounding the embedded shape identify themselves, they simply pass messages to other grains a fixed distance away, which in turn identify themselves as defining the perimeter of the duplicate. If the duplicate is supposed to be 10 times the size of the original, each square surrounding the embedded shape will map to 10 squares of the duplicate's perimeter. Once the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors.
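
A toy, centralized rendering of this idea fits in a few lines of Python. In the real system the logic runs as message passing inside each grain; here the grid, the neighbor test and the scaling step are illustrative simplifications of ours, not the DRL algorithm:

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def shape_perimeter(occupied, width, height):
    """Grains whose missing neighbor lies *inside* the heap's bounding
    box border the embedded shape rather than the heap's outer edge."""
    per = set()
    for cell in occupied:
        for n in neighbors(cell):
            if n not in occupied and 0 <= n[0] < width and 0 <= n[1] < height:
                per.add(cell)
    return per

def scaled_outline(perimeter, factor=10):
    """Map each shape-perimeter grain to a grain of the enlarged copy's
    perimeter; a full version would also fill in the factor-1 cells
    between mapped corners, giving the 1-to-10 mapping described above."""
    return {(x * factor, y * factor) for (x, y) in perimeter}

# 5x5 heap with a single grain missing at (2, 2) -- the "embedded shape":
occupied = {(x, y) for x in range(5) for y in range(5)} - {(2, 2)}
print(sorted(shape_perimeter(occupied, 5, 5)))   # the 8 grains ringing (2, 2)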

Rapid prototyping

The same algorithm can be varied to produce multiple, similarly sized copies of a sample shape, or to produce a single, large copy of a large object. "Say the tire rod in your car has sheared," Gilpin says. "You could duct tape it back together, put it into your system and get a new one."

The cubes -- or "smart pebbles" -- that Gilpin and Rus built to test their algorithm enact the simplified, two-dimensional version of the system. Four faces of each cube are studded with so-called electropermanent magnets, materials that can be magnetized or demagnetized with a single electric pulse. Unlike permanent magnets, they can be turned on and off; unlike electromagnets, they don't require a constant current to maintain their magnetism. The pebbles use the magnets not only to connect to each other but also to communicate and to share power. Each pebble also has a tiny microprocessor, which can store just 32 kilobytes of program code and has only two kilobytes of working memory.

The pebbles have magnets on only four faces, Gilpin explains, because, with the addition of the microprocessor and circuitry to regulate power, "there just wasn't room for two more magnets." But Gilpin and Rus performed computer simulations showing that their algorithm would work with a three-dimensional block of cubes, too, by treating each layer of the block as its own two-dimensional grid. The cubes discarded from the final shape would simply disconnect from the cubes above and below them as well as those next to them.

True smart sand, of course, would require grains much smaller than 10-millimeter cubes. But according to Robert Wood, an associate professor of electrical engineering at Harvard University, that's not an insurmountable obstacle. "Take the core functionalities of their pebbles," says Wood, who directs Harvard's Microrobotics Laboratory. "They have the ability to latch onto their neighbors; they have the ability to talk to their neighbors; they have the ability to do some computation. Those are all things that are certainly feasible to think about doing in smaller packages."

"It would take quite a lot of engineering to do that, of course," Wood cautions. "That's a well-posed but very difficult set of engineering challenges that they could continue to address in the future."

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty, MIT News Office.


New robots can continuously map their environment with low-cost camera

ScienceDaily (Feb. 16, 2012) — Robots could one day navigate through constantly changing surroundings with virtually no input from humans, thanks to a system that allows them to build and continuously update a three-dimensional map of their environment using a low-cost camera such as Microsoft's Kinect.

The system, being developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), could also allow blind people to make their way unaided through crowded buildings such as hospitals and shopping malls.

To explore unknown environments, robots need to be able to map them as they move around -- estimating the distance between themselves and nearby walls, for example -- and to plan a route around any obstacles, says Maurice Fallon, a research scientist at CSAIL who is developing these systems alongside John J. Leonard, professor of mechanical and ocean engineering, and graduate student Hordur Johannsson.

But while a large amount of research has been devoted to developing one-off maps that robots can use to navigate around an area, these systems cannot adjust to changes in the surroundings over time, Fallon says: "If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map."

The new approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time, he says. The team has previously tested the approach on robots equipped with expensive laser-scanners, but in a paper to be presented this May at the International Conference on Robotics and Automation in St. Paul, Minn., they have now shown how a robot can locate itself in such a map with just a low-cost Kinect-like camera.

As the robot travels through an unexplored area, the Kinect sensor's visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created -- including details such as the edges of walls, for example -- with all the previous images it has taken until it finds a match.

At the same time, the system constantly estimates the robot's motion, using on-board sensors that measure the distance its wheels have rotated. By combining the visual information with this motion data, it can determine where within the building the robot is positioned. Combining the two sources of information allows the system to eliminate errors that might creep in if it relied on the robot's on-board sensors alone, Fallon says.
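
The fusion step can be caricatured in a few lines of Python. Real systems such as this one use probabilistic filtering or pose-graph optimization rather than a fixed blend, so the weighting below is purely an illustrative assumption:

def fuse(odometry_pose, visual_pose, visual_weight=0.7):
    """Blend (x, y, heading) estimates from wheel odometry and from
    matching camera imagery against the stored map. Heading blending
    ignores angle wraparound for brevity."""
    w = visual_weight
    return tuple((1 - w) * o + w * v
                 for o, v in zip(odometry_pose, visual_pose))

dead_reckoned = (10.2, 4.1, 0.31)   # from wheel rotations: drifts over time
place_match = (10.8, 3.9, 0.28)     # from matching Kinect imagery to the map
print(fuse(dead_reckoned, place_match))

The point of combining the two sources is exactly the one Fallon makes: each corrects the other's characteristic error, odometry drift on one side and occasional visual mismatches on the other.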

Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene, Fallon says.

The team tested the system on a robotic wheelchair, on a PR2 robot developed by Willow Garage in Menlo Park, Calif., and on a portable sensor suite worn by a human volunteer. They found it could locate itself within a 3-D map of its surroundings while traveling at up to 1.5 meters per second.

Ultimately, the algorithm could allow robots to travel around office or hospital buildings, planning their own routes with little or no input from humans, Fallon says.

It could also be used as a wearable visual aid for blind people, allowing them to move around even large and crowded buildings independently, says Seth Teller, head of the Robotics, Vision and Sensor Networks group at CSAIL and principal investigator of the human-portable mapping project. "There are also a lot of military applications, like mapping a bunker or cave network to enable a quick exit or re-entry when needed," he says. "Or a HazMat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later. These teams wear so much equipment that time is of the essence, making efficient mapping and navigation critical."

While a great deal of research is focused on developing algorithms to allow robots to create maps of places they have visited, the work of Fallon and his colleagues takes these efforts to a new level, says Radu Rusu, a research scientist at Willow Garage who was not involved in this project. That is because the team is using the Microsoft Kinect sensor to map the entire 3-D space, not just viewing everything in two dimensions.

"This opens up exciting new possibilities in robot research and engineering, as the old-school 'flatland' assumption that the scientific community has been using for many years is fundamentally flawed," he says. "Robots that fly or navigate in environments with stairs, ramps and all sorts of other indoor architectural elements are getting one step closer to actually doing something useful. And it all starts with being able to navigate."

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Helen Knight.
