
Saturday, September 29, 2012

Simulating reality: Less memory required on quantum computer than on classical computer, study shows

ScienceDaily (May 3, 2012) — Simulations of reality would require less memory on a quantum computer than on a classical computer, new research from scientists at the University of Bristol, published in Nature Communications, has shown.

The study by Dr Karoline Wiesner from the School of Mathematics and Centre for Complexity Sciences, together with researchers from the Centre for Quantum Technologies in Singapore, demonstrates a new way in which computers based on quantum physics could beat the performance of classical computers.

When confronted with a complicated system, scientists typically strive to identify underlying simplicity which is then articulated as natural laws and fundamental principles. However, complex systems often seem immune to this approach, making it difficult to extract underlying principles.

Researchers have discovered that complex systems can be less complex than originally thought if they allow quantum physics to help: quantum models of complex systems are simpler and predict their behaviour more efficiently than classical models.

A good measure of the complexity of a particular system or process is how predictable it is. For example, the outcome of a fair coin toss is inherently unpredictable and any resources (beyond a random guess) spent on predicting it would be wasted. Therefore, the complexity of such a process is zero.

Other systems are quite different, for example neural spike sequences (which indicate how sensory and other information is represented in the brain) or protein conformational dynamics (how proteins -- the molecules that facilitate biological functions -- undergo structural rearrangement). These systems have memory and are predictable to some extent; they are more complex than a coin toss.
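
As a rough illustration of this notion of complexity, the memory cost of classically simulating a process can be quantified as the Shannon entropy of the distribution over its predictive states. The short Python sketch below computes that quantity for a coin toss (one predictive state, zero complexity) and for a toy two-state process with memory; it is only a schematic of the classical side of the comparison, not the quantum encoding studied in the paper.

```python
import numpy as np

def statistical_complexity(state_probs):
    # Shannon entropy (in bits) of the distribution over predictive states:
    # a standard measure of the classical memory needed to simulate a process.
    p = np.asarray(state_probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A fair coin toss needs only one predictive state, so its complexity is zero.
print(statistical_complexity([1.0]))        # 0.0

# A toy two-state process with memory (70% of the time in one state, 30% in
# the other) needs roughly 0.88 bits of memory to simulate classically.
print(statistical_complexity([0.7, 0.3]))   # ~0.881
```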

The operation of such complex systems in many organisms is based on a simulation of reality. This simulation allows the organism to predict and thus react to the environment around it. However, if quantum dynamics can be exploited to make identical predictions with less memory, then such systems need not be as complex as originally thought.

Dr Wiesner added: "On a more fundamental level, we found that the efficiency of prediction still does not reach the lower bound given by the principles of thermodynamics -- there is room for improvement. This might hint at a source of temporal asymmetry within the framework of quantum mechanics; that it is fundamentally impossible to simulate certain observable statistics reversibly and hence with perfect efficiency."


Story Source:

The above story is reprinted from materials provided by Bristol University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Mile Gu, Karoline Wiesner, Elisabeth Rieper, Vlatko Vedral. Quantum mechanics can reduce the complexity of classical models. Nature Communications, 2012; 3: 762 DOI: 10.1038/ncomms1761

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Friday, September 28, 2012

Who’s the most influential in a social graph? New software recognizes key influencers faster than ever

ScienceDaily (Sep. 7, 2012) — At an airport, many people are essential for planes to take off. Gate staff, refueling crews, flight attendants and pilots are in constant communication with each other as they perform required tasks. But it's the air traffic controller who talks with every plane, coordinating departures and runways. Communication must run through her in order for an airport to run smoothly and safely.

In computational terms, the air traffic controller has the highest "betweenness centrality"; she is the most connected person in the system. In this example, finding the key influencer is easy because each departure process is nearly the same.

Determining the most influential person on a social media network (or, in computer terms, a graph) is more complex. Thousands of users are interacting about a single subject at the same time, and new connections (known computationally as edges) are constantly being added to the streaming conversation as people join.

Georgia Tech has developed a new algorithm that quickly determines betweenness centrality for streaming graphs. The algorithm can identify influencers as information changes within a network. The first-of-its-kind streaming tool was presented this week by Computational Science and Engineering Ph.D. candidate Oded Green at the Social Computing Conference in Amsterdam.

"Unlike existing algorithms, our system doesn't restart the computational process from scratch each time a new edge is inserted into a graph," said College of Computing Professor David Bader, the project's leader. "Rather than starting over, our algorithm stores the graph's prior centrality data and only does the bare minimal computations affected by the inserted edges."

In some cases, betweenness centrality can be computed more than 100 times faster using the Georgia Tech software. The open source software will soon be available to businesses.

Bader, the Institute's executive director for high performance computing, says the technology has wide-ranging applications. For instance, advertisers could use the software to identify which celebrities are most influential on Twitter or Facebook, or both, during product launches.

"Despite a fragmented social media landscape, data analysts would be able to use the algorithm to look at each social media network and mark inferences about a single influencer across these different platforms," said Bader.

As another example, the algorithm could be used to analyze traffic patterns during a wreck or traffic jam. Transportation officials could quickly determine the best new routes based on gradual side-street congestion.

The accepted paper was co-authored by Electrical and Computer Engineering Ph.D. candidate Rob McColl.


Story Source:

The above story is reprinted from materials provided by Georgia Institute of Technology.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Thursday, September 27, 2012

Self-adapting computer network that defends itself against hackers?

ScienceDaily (May 10, 2012) — In the online struggle for network security, Kansas State University cybersecurity experts are adding an ally to the security force: the computer network itself.

Scott DeLoach, professor of computing and information sciences, and Xinming "Simon" Ou, associate professor of computing and information sciences, are researching the feasibility of building a computer network that could protect itself against online attackers by automatically changing its setup and configuration.

DeLoach and Ou were recently awarded a five-year grant of more than $1 million from the Air Force Office of Scientific Research to fund the study "Understanding and quantifying the impact of moving target defenses on computer networks." The study, which began in April, will be the first to document whether this type of adaptive cybersecurity, called moving-target defense, can be effective. If it can work, researchers will determine if the benefits of creating a moving-target defense system outweigh the overhead and resources needed to build it.

Helping Ou and DeLoach in their investigation and research are Kansas State University students Rui Zhuang and Su Zhang, both doctoral candidates in computing and information sciences from China, and Alexandru Bardas, doctoral student in computing and information sciences from Romania.

As the study progresses the computer scientists will develop a set of analytical models to determine the effectiveness of a moving-target defense system. They will also create a proof-of-concept system as a way to experiment with the idea in a concrete setting.

"It's important to investigate any scientific evidence that shows that this approach does work so it can be fully researched and developed," DeLoach said. He started collaborating with Ou to apply intelligent adaptive techniques to cybersecurity several years ago after a conversation at a university open house.

The term moving-target defense -- a subarea of adaptive security in the cybersecurity field -- was first coined around 2008, although similar concepts have been proposed and studied since the early 2000s. The idea behind moving-target defense in the context of computer networks is to create a computer network that is no longer static in its configuration. Instead, as a way to thwart cyber attackers, the network automatically and periodically randomizes its configuration through various methods -- such as changing the addresses of software applications on the network; switching between instances of the applications; and changing the location of critical system data.
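
The toy Python sketch below illustrates the general idea of periodically re-randomizing a network's configuration -- here, simply reassigning made-up addresses to a few named services on a timer. It is purely schematic and reflects nothing of the Kansas State prototype's actual design.

```python
import random
import time

SERVICES = ["web", "database", "file-store"]            # hypothetical services
ADDRESS_POOL = [f"10.0.0.{i}" for i in range(2, 250)]    # hypothetical addresses

def reshuffle():
    # Give every service a fresh, randomly chosen address from the pool,
    # so the network's layout never stays the same for long.
    return dict(zip(SERVICES, random.sample(ADDRESS_POOL, len(SERVICES))))

config = reshuffle()
for epoch in range(3):          # three randomization "epochs"
    print(epoch, config)
    time.sleep(0.1)             # stand-in for the real randomization interval
    config = reshuffle()
```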

Ou and DeLoach said the key is to make the network appear, to an attacker, to be changing chaotically, while to an authorized user the system operates normally.

"If you have a Web server, pretty much anybody in the world can figure out where you are and what software you're running," DeLoach said. "If they know that, they can figure out what vulnerabilities you have. In a typical scenario, attackers scan your system and find out everything they can about your server configuration and what security holes it has. Then they select the best time for them to attack and exploit those security holes in order to do the most damage. This could change that."

Creating a computer network that could automatically detect and defend itself against cyber attacks would substantially increase the security of online data for universities, government departments, corporations and businesses -- all of which have been the targets of large-scale cyber attacks.

In February 2011 it was discovered that the Nasdaq Stock Market's computer network had been infiltrated by hackers. Although federal investigators concluded that it was unlikely the hackers stole any information, the network's security had been left vulnerable for more than a year while the hackers visited it numerous times.

According to Ou, creating a moving-target defense system would shift the power imbalance that currently favors hackers -- who need only find a single security hole to exploit -- back to the network administrators, whose system would frequently wipe out whatever foothold attackers had gained and present them with a clean slate.

"This is a game-changing idea in cybersecurity," Ou said. "People feel that we are currently losing against online attackers. In order to fundamentally change the cybersecurity landscape and reduce that high risk we need some big, fundamental changes to the way computers and networks are constructed and organized."


Story Source:

The above story is reprinted from materials provided by Kansas State University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



'Game-powered machine learning' opens door to Google for music

ScienceDaily (May 4, 2012) — Can a computer be taught to automatically label every song on the Internet using sets of examples provided by unpaid music fans? University of California, San Diego engineers have found that the answer is yes, and the results are as accurate as using paid music experts to provide the examples, saving considerable time and money. In results published in the April 24 issue of the Proceedings of the National Academy of Sciences, the researchers report that their solution, called "game-powered machine learning," would enable music lovers to search every song on the web well beyond popular hits, with a simple text search using key words like "funky" or "spooky electronica."

Searching for specific multimedia content, including music, is a challenge because of the need to use text to search images, video and audio. The researchers, led by Gert Lanckriet, a professor of electrical engineering at the UC San Diego Jacobs School of Engineering, hope to create a text-based multimedia search engine that will make it far easier to access the explosion of multimedia content online. That's because humans working round the clock labeling songs with descriptive text could never keep up with the volume of content being uploaded to the Internet. For example, YouTube users upload 60 hours of video content per minute, according to the company.

In Lanckriet's solution, computers study the examples of music that have been provided by the music fans and labeled in categories such as "romantic," "jazz," "saxophone," or "happy." The computer then analyzes waveforms of recorded songs in these categories looking for acoustic patterns common to each. It can then automatically label millions of songs by recognizing these patterns. Training computers in this way is referred to as machine learning. "Game-powered" refers to the millions of people who are already online that Lanckriet's team is enticing to provide the sets of examples by labeling music through a Facebook-based online game called Herd It (http://apps.facebook.com/herd-it).
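
The following sketch shows, in miniature, what such supervised tag learning can look like: one classifier per tag is trained on labelled examples and then applied to an unlabelled song. The "acoustic features" here are random stand-in vectors rather than real waveform descriptors, and the scikit-learn models are illustrative choices, not the ones used in the published system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Stand-in "acoustic features" and fan-supplied tag labels for 200 clips.
X_train = rng.normal(size=(200, 40))
tags = ["romantic", "jazz", "saxophone", "happy"]
Y_train = rng.integers(0, 2, size=(200, len(tags)))

# One binary classifier per tag, trained on the labelled examples.
model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X_train, Y_train)

# Auto-tag a previously unseen song from its (stand-in) features.
new_song = rng.normal(size=(1, 40))
predicted = [t for t, flag in zip(tags, model.predict(new_song)[0]) if flag]
print(predicted)
```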

"This is a very promising mechanism to address large-scale music search in the future," said Lanckriet, whose research earned him a spot on MIT Technology Review's list of the world's top young innovators in 2011.

Another significant finding in the paper is that the machine can use what it has learned to design new games that elicit the most effective training data from the humans in the loop. "The question is if you have only extracted a little bit of knowledge from people and you only have a rudimentary machine learning system, can the computer use that rudimentary version to determine the most effective next questions to ask the people?" said Lanckriet. "It's like a baby. You teach it a little bit and the baby comes back and asks more questions." For example, the machine may be great at recognizing the music patterns in rock music but struggle with jazz. In that case, it might ask for more examples of jazz music to study.

It's the active feedback loop that combines human knowledge about music and the scalability of automated music tagging through machine learning that makes "Google for music" a real possibility. Although human knowledge about music is essential to the process, Lanckriet's solution requires relatively little human effort to achieve great gains. Through the active feedback loop, the computer automatically creates new Herd It games to collect the specific human input it needs to most effectively improve the auto-tagging algorithms, said Lanckriet. The game goes well beyond the two primary methods of categorizing music used today: paying experts in music theory to analyze songs -- the method used by Internet radio sites like Pandora -- and collaborative filtering, which online book and music sellers now use to recommend products by comparing a buyer's past purchases with those of people who made similar choices.
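
A minimal sketch of that feedback loop, again on stand-in data: estimate how well the current model handles each tag and request more game-collected labels for the weakest one. The tags, data and model choice are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))                     # stand-in audio features
tags = ["rock", "jazz", "classical"]
Y = rng.integers(0, 2, size=(200, len(tags)))      # stand-in fan labels

# Score the current model per tag and ask players for more examples of the
# tag it handles worst.
scores = [cross_val_score(LogisticRegression(max_iter=1000), X, Y[:, j], cv=3).mean()
          for j in range(len(tags))]
print("Ask players for more examples of:", tags[int(np.argmin(scores))])
```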

Both methods are effective up to a point. But paid music experts are expensive and can't possibly keep up with the vast expanse of music available online. Pandora has just 900,000 songs in its catalog after 12 years in operation. Meanwhile, collaborative filtering only really works with books and music that are already popular and selling well.

The big picture: Personalized radio

Lanckriet foresees a time when -- thanks to this massive database of cataloged music -- cell phone sensors will track the activities and moods of individual cell phone users and use that data to provide a personalized radio service -- the kind that matches music to one's activity and mood, without repeating the same songs over and over again.

"What I would like long-term is just one single radio station that starts in the morning and it adapts to you throughout the day. By that I mean the user doesn't have to tell the system, "Hey, it's afternoon now, I prefer to listen to hip hop in the afternoon. The system knows because it has learned the cell phone user's preferences."

This kind of personalized cell phone radio can only be made possible if the cell phone has a large database of accurately labeled songs from which to choose. That's where efforts to develop a music search engine are ultimately heading. The first step is figuring out how to label all the music online well beyond the most popular hits. As Lanckriet's team demonstrated in PNAS, game-powered machine learning is making that a real possibility.

Lanckriet's research is funded by the National Science Foundation, National Institutes of Health, the Alfred P. Sloan Foundation, Google, Yahoo!, Qualcomm, IBM and eHarmony. You can watch a video about the research and Lanckriet's auto-tagging algorithms to learn more.


Story Source:

The above story is reprinted from materials provided by University of California - San Diego.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

L. Barrington, D. Turnbull, G. Lanckriet. Game-powered machine learning. Proceedings of the National Academy of Sciences, 2012; 109 (17): 6411 DOI: 10.1073/pnas.1014748109

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Wednesday, September 26, 2012

Dynamic view of city created based on Foursquare check-in data

ScienceDaily (May 1, 2012) — The millions of "check-ins" generated by foursquare, the location-based social networking site, can be used to create a dynamic view of a city's workings and character, Carnegie Mellon University researchers say. In contrast to static neighborhood boundaries and dated census figures, these "Livehoods" reflect the ever-changing patterns of city life.

Researchers from the School of Computer Science (SCS) have developed an algorithm that takes the check-ins generated when foursquare members visit participating businesses or venues, and clusters them based on a combination of the location of the venues and the groups of people who most often visit them. This information is then mapped to reveal a city's Livehoods, a term coined by the SCS researchers.
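
As a schematic of this kind of clustering, the sketch below groups a handful of made-up venues using a similarity that blends geographic closeness with overlap in the people who check in at them, then applies off-the-shelf spectral clustering. The weighting and the clustering method are illustrative assumptions, not the exact procedure used by the Livehoods team.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Made-up venues: coordinates plus a venue-by-user check-in matrix.
coords = np.array([[40.000, -80.000], [40.001, -80.001], [40.010, -80.010],
                   [40.011, -80.012], [40.020, -80.020]])
visits = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 1, 1]], dtype=float)

def similarity(i, j, alpha=0.5):
    # Blend geographic closeness with overlap in the people who visit.
    geo = np.exp(-100 * np.linalg.norm(coords[i] - coords[j]))
    people = visits[i] @ visits[j] / (np.linalg.norm(visits[i]) * np.linalg.norm(visits[j]))
    return alpha * geo + (1 - alpha) * people

n = len(coords)
S = np.array([[similarity(i, j) for j in range(n)] for i in range(n)])

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels)   # the five venues split into two "livehoods"
```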

Maps for New York, San Francisco and Pittsburgh are available on the project website, http://livehoods.org/. People can help choose the next city to map by voting on the Livehoods Facebook page.

"Our goal is to understand how cities work through the lens of social media," said Justin Cranshaw, a Ph.D. student in SCS's Institute for Software Research.

Part of the emerging field of Urban Computing, the Livehoods project takes advantage of the proliferation of smartphones and the location-based services they make possible. In this case, the researchers analyzed data from foursquare, but the same computational techniques could be applied to many location-based databases.

Livehoods thus provide a powerful new tool that could be used to address a wide variety of urban problems and opportunities. The researchers are exploring applications to city planning, transportation and real estate development. Livehoods also could be useful for businesses developing marketing campaigns or for public health officials tracking the spread of disease.

"In urban studies, researchers have always had to interview lots of people to get a sense of a community's character and, even then, they must extrapolate from only a small sample of the community," said Raz Schwartz, a Ph.D. student at Bar-Ilan University, Israel, and a visiting scholar at SCS's Human-Computer Interaction Institute. "Now, by using foursquare data, we're able to tap a large database that can be continually updated."

The Livehoods project is led by Norman Sadeh, professor and co-director of the Institute for Software Research's Ph.D. program in Computation, Organizations and Society, and Jason Hong, associate professor in the Human-Computer Interaction Institute. The team will present its findings June 5 at the International AAAI Conference on Weblogs and Social Media (ICWSM) in Dublin, Ireland.

All of the Livehoods analysis is based on foursquare check-ins that users have shared publicly via social networks such as Twitter. This dataset of 18 million check-ins includes user ID, time, latitude and longitude, and the name and category of the venue for each check-in.

In their study of the Pittsburgh metropolitan area, the researchers found that the Livehoods they identified sometimes spilled over existing neighborhood boundaries, or identified several communities within a neighborhood. The Pittsburgh analysis was based on 42,787 check-ins by 3,840 users at 5,349 venues.

For instance, they found that the upscale neighborhood of Shadyside actually had two demographically distinct Livehoods -- an older, staid community to the west and a younger, "indie" community to the east. Moreover, the younger Livehood spilled over into East Liberty, a neighborhood that long suffered from decay but recently has seen some upscale development.

"That makes sense to me," observed a 24-year-old resident of eastern Shadyside, one of 27 Pittsburgh residents who were interviewed by researchers to validate the findings. "I think at one point it was more walled off and this was poor (East Liberty) and this was wealthy (Shadyside) and now there are nice places in East Liberty and there's some more diversity in this area (eastern Shadyside)."

The researchers found that one Pittsburgh neighborhood, the South Side Flats, contained four distinct Livehoods, including one centered on bars popular with college students, another centered on a new shopping district dominated by chain stores and another that focused on a supermarket. Again, these Livehoods made sense to residents familiar with the area.

The study has limitations. Foursquare users tend to be young, urban professionals with smartphones. Consequently, areas of cities with older, poorer populations are nearly blank in the Livehoods maps. "You can literally see the digital divide," Schwartz said. Likewise, foursquare members don't check in at all of their destinations -- hospitals, for instance. But the researchers contend that the limitations are those of the data, not the methodology.


Story Source:

The above story is reprinted from materials provided by Carnegie Mellon University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Tuesday, September 25, 2012

Scientist invents pocket living room TV

ScienceDaily (Aug. 13, 2012) — You will no longer have to abandon a TV show midway when you leave home: you can now 'pull' the programme from your TV screen onto your tablet and continue watching it seamlessly.

You can also watch the same TV show or movie together with your family and friends, no matter which part of the world they are in. Not only that, but you'll be able to discuss the show, whether you are on your personal tablet or smart phone, through a channel of your choice, be it video chat, voice or text.

The world's first 'pick up and throw back' video feature allows your video and chat sessions to follow you wherever you go, providing continuous social engagement.

This innovative multi-screen mobile social TV experience is now being made a reality by Assistant Professor Wen Yonggang from the School of Computer Engineering, Nanyang Technological University (NTU). It has already attracted the attention of both local and international telecommunication giants, which have expressed interest in integrating this technology into their existing cable networks as a market differentiator for cable television and mobile networks. With discussions currently underway, the public in Singapore can expect to see videos and TV shows on the go together with their friends on the 'cloud' in about two years' time.

According to a 2012 report by Global Industry Analysts Inc, the global entertainment industry is set to reach US$1.4 trillion by 2015. This system is designed to tap into the technology convergence in the multimedia market, which includes smart phones, tablets, computers and television, and to help boost Singapore's ambition to become a digital media hub in Asia.

Assistant Professor Wen has described his invention as the next frontier of television experience as you can now "bring the social experience of watching television in your living room wherever you go."

Named the "Social Cloud TV," this system allows you to watch TV programmes and online videos with your family and friends at the same time. The system leverages a cloud backend for media processing (e.g., video transcoding), such that the same video can be streamed into devices in the most suitable format. When viewing a TV show or perhaps a live soccer match, you can invite family and friends to join your session, from either your phone book or social networking contact lists.

Currently patent pending, this human-computer interaction technology enables the same show on the TV or computer to be brought to mobile devices seamlessly and migrated across multiple screens (e.g., TV, laptop, smartphone and tablet) without a hitch.
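
A toy sketch of the 'pull / throw back' idea is shown below: a cloud-side record of what is playing and how far along it is lets any device resume the session. All names and the dictionary-based store are invented for illustration; the actual Social Cloud TV system is not described at this level of detail in the source.

```python
# session_id -> playback state shared "in the cloud" (here, just a dict).
sessions = {}

def throw(session_id, video, position_s, device):
    # Park the current playback state so another screen can pick it up.
    sessions[session_id] = {"video": video, "position_s": position_s,
                            "device": device}

def pull(session_id, new_device):
    # Resume the same video, at the same position, on a different screen.
    state = sessions[session_id]
    state["device"] = new_device
    return state

throw("movie-night", "episode-42.mp4", position_s=1310, device="living-room-tv")
print(pull("movie-night", new_device="tablet"))
```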

"You could watch a video with your class mates on the computer, and just before you leave school, 'pull' the show into your tablet and continue watching on the go," said Assistant Professor Wen.

"Upon reaching home, you could just turn on your television and 'throw' the video back to the TV, and continue watching the programme there."

"With the increase in online video and personal multimedia devices, we have lost out on the experience of watching TV shows together as a family and as a social activity with friends. So I hope that with my invention, people can now reconnect with each other socially using videos."

The social TV software will also allow users to share their own content, online videos and TV programmes with others easily over social networks such as Facebook and Twitter.

This prototype took one and a half years to develop. The research team, including Assistant Professor Wen, consists of nine members, three of whom are undergraduate students.


Story Source:

The above story is reprinted from materials provided by Nanyang Technological University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Scientists create chemical 'brain': Giant network links all known compounds and reactions

ScienceDaily (Aug. 22, 2012) — Northwestern University scientists have connected 250 years of organic chemical knowledge into one giant computer network -- a chemical Google on steroids. This "immortal chemist" will never retire and take away its knowledge but instead will continue to learn, grow and share.

A decade in the making, the software optimizes syntheses of drug molecules and other important compounds, condenses long (and expensive) syntheses into shorter and more economical routes, and identifies suspicious chemical recipes that could lead to chemical weapons.

"I realized that if we could link all the known chemical compounds and reactions between them into one giant network, we could create not only a new repository of chemical methods but an entirely new knowledge platform where each chemical reaction ever performed and each compound ever made would give rise to a collective 'chemical brain,'" said Bartosz A. Grzybowski, who led the work. "The brain then could be searched and analyzed with algorithms akin to those used in Google or telecom networks."

Called Chematica, the network comprises some seven million chemicals connected by a similar number of reactions. A family of algorithms that searches and analyzes the network allows the chemist at his or her computer to easily tap into this vast compendium of chemical knowledge. And the system learns from experience, as more data and algorithms are added to its knowledge base.

Details and demonstrations of the system are published in three back-to-back papers in the Aug. 6 issue of the journal Angewandte Chemie.

Grzybowski is the senior author of all three papers. He is the Kenneth Burgess Professor of Physical Chemistry and Chemical Systems Engineering in the Weinberg College of Arts and Sciences and the McCormick School of Engineering and Applied Science.

In the Angewandte paper titled "Parallel Optimization of Synthetic Pathways Within the Network of Organic Chemistry," the researchers have demonstrated algorithms that find optimal syntheses leading to drug molecules and other industrially important chemicals.

"The way we coded our algorithms allows us to search within a fraction of a second billions of chemical syntheses leading to a desired molecule," Grzybowski said. "This is very important since within even a few synthetic steps from a desired target the number of possible syntheses is astronomical and clearly beyond the search capabilities of any human chemist."

Chematica can test and evaluate every possible synthesis that exists, not only the few a particular chemist might have an interest in. In this way, the algorithms find truly optimal ways of making desired chemicals.
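
At its simplest, searching such a network for a good route resembles shortest-path search over a weighted graph, as in the hedged sketch below. The compounds, reactions and costs are invented, and Chematica's real algorithms are far more elaborate than a single shortest-path query.

```python
import networkx as nx

# Invented reaction network: nodes are compounds, directed edges are known
# reactions, weights stand in for cost (reagents, steps, hazard, ...).
rxn = nx.DiGraph()
rxn.add_weighted_edges_from([
    ("A", "B", 1.0), ("B", "target", 4.0),
    ("A", "C", 2.0), ("C", "D", 1.0), ("D", "target", 1.0),
])

# Cheapest route from an available starting material to the desired target.
route = nx.shortest_path(rxn, "A", "target", weight="weight")
cost = nx.shortest_path_length(rxn, "A", "target", weight="weight")
print(route, cost)   # ['A', 'C', 'D', 'target'] 4.0
```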

The software already has been used in industrial settings, Grzybowski said, to design more economical syntheses of companies' products. Synthesis can be optimized with various constraints, such as avoiding reactions involving environmentally dangerous compounds. Using the Chematica software, such green chemistry optimizations are just one click away.

Another important area of application is the shortening of synthetic pathways into the so-called "one-pot" reactions. One of the holy grails of organic chemistry has been to design methods in which all the starting materials could be combined at the very beginning and then the process would proceed in one pot -- much like cooking a stew -- all the way to the final product.

The Northwestern researchers detail how this can be done in the Angewandte paper titled "Rewiring Chemistry: Algorithmic Discovery and Experimental Validation of One-Pot Reactions in the Network of Organic Chemistry."

The chemists have taught their network some 86,000 chemical rules that check -- again, in a fraction of a second -- whether a sequence of individual reactions can be combined into a one-pot procedure. Thirty predictions of one-pot syntheses were tested and fully validated. Each synthesis proceeded as predicted and had excellent yields.
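
The flavour of such rule checking can be illustrated with a toy compatibility test: a route can be telescoped into one pot only if no step's conditions clash with any other step's. The conditions and clash rules below are invented placeholders, not Chematica's chemical rules.

```python
# Invented reaction-condition sets for three consecutive steps of a route.
route_conditions = [{"acidic"}, {"aqueous"}, {"acidic", "heated"}]

# Invented clash rules: pairs of conditions that cannot share one pot.
incompatible = {frozenset({"acidic", "strongly_basic"}),
                frozenset({"aqueous", "moisture_sensitive"})}

def one_pot_ok(steps, clashes):
    # The route can be telescoped only if the union of all conditions
    # contains no forbidden pair.
    combined = set().union(*steps)
    return not any(pair <= combined for pair in clashes)

print(one_pot_ok(route_conditions, incompatible))   # True for this toy route
```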

In one striking example, Grzybowski and his team synthesized an anti-asthma drug using the one-pot method. The drug typically would take four consecutive synthesis and purification steps.

"Our algorithms told us this sequence could be combined into just one step, and we were naturally curious to check it out in a flask," Grzybowski said. "We performed the one-pot reaction and obtained the drug in excellent yield and at a fraction of the cost the individual steps otherwise would have accrued."

The third area of application is the use of the Chematica network approach for predicting and monitoring syntheses leading to chemical weapons. This is reported in the Angewandte paper titled "Chemical Network Algorithms for the Risk Assessment and Management of Chemical Threats."

"Since we now have this unique ability to scrutinize all possible synthetic strategies, we also can identify the ones that a potential terrorist might use to make a nerve gas, an explosive or another toxic agent," Grzybowski said.

Algorithms known from game theory are first applied to identify the strategies that are hardest for the federal government to detect -- the use of substances, for example, such as kitchen salt, clarifiers, grain alcohol and a fertilizer, all freely available from a local convenience store. Characteristic combinations of seemingly innocuous chemicals, such as this example, are red flags.

This strategy is very different from the government's current approach of monitoring and regulating individual substances, Grzybowski said. Chematica can be used to monitor patterns of chemicals that together become suspicious, instead of monitoring individual compounds. Grzybowski is working with the federal government to implement the software.

Chematica now is being commercialized. "We chose this name," Grzybowski said, "because networks will do to chemistry what Mathematica did to scientific computing. Our approach will accelerate synthetic design and discovery and will optimize synthetic practice at large."


Story Source:

The above story is reprinted from materials provided by Northwestern University. The original article was written by Megan Fellman.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal References:

Mikolaj Kowalik, Chris M. Gothard, Aaron M. Drews, Nosheen A. Gothard, Alex Weckiewicz, Patrick E. Fuller, Bartosz A. Grzybowski, Kyle J. M. Bishop. Parallel Optimization of Synthetic Pathways within the Network of Organic Chemistry. Angewandte Chemie International Edition, 2012; 51 (32): 7928 DOI: 10.1002/anie.201202209

Chris M. Gothard, Siowling Soh, Nosheen A. Gothard, Bartlomiej Kowalczyk, Yanhu Wei, Bilge Baytekin, Bartosz A. Grzybowski. Rewiring Chemistry: Algorithmic Discovery and Experimental Validation of One-Pot Reactions in the Network of Organic Chemistry. Angewandte Chemie International Edition, 2012; 51 (32): 7922 DOI: 10.1002/anie.201202155

Patrick E. Fuller, Chris M. Gothard, Nosheen A. Gothard, Alex Weckiewicz, Bartosz A. Grzybowski. Chemical Network Algorithms for the Risk Assessment and Management of Chemical Threats. Angewandte Chemie International Edition, 2012; 51 (32): 7933 DOI: 10.1002/anie.201202210

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Monday, September 24, 2012

Scanning for drunks with a thermal camera

ScienceDaily (Sep. 4, 2012) — Thermal imaging technology might one day be used to identify drunks before they become a nuisance in bars, airports or other public spaces. Georgia Koukiou and Vassilis Anastassopoulos of the Electronics Laboratory at the University of Patras, Greece, are developing software that can objectively determine whether a person has consumed an excessive amount of alcohol based solely on the relative temperature of different parts of the person's face.

Writing in the International Journal of Electronic Security and Digital Forensics, the team explains how such a system sidesteps the subjective judgements one might otherwise make based on behaviour, and so gives law enforcement and other authorities definitive evidence of inebriation.

The team explains how they have devised two algorithms that can determine whether a person has been drinking alcohol to excess based on infrared thermal imaging of the person's face. The first approach simply involves measuring pixel values at specific points on the person's face, which are then compared to values in a database of scans of sober and inebriated people. Because alcohol causes dilation of blood vessels at the surface of the skin, hot spots appear on the face in the thermal imaging scans, and these regions can be classified as drunk or sober. Similar technology has been used at international borders and elsewhere to ascertain whether a person is infected with a virus, such as flu or SARS.

In their second approach, the team assesses the thermal differences between various locations on the face and evaluates their overall pattern. They found that increased thermal emission is commonly seen at the nose of an inebriated individual, whereas the forehead tends to be cooler. This second system relies on the algorithm "understanding" what different parts of the face are present in the thermal image. The two techniques working in parallel could be used to quickly scan individuals entering public premises or attempting to buy more alcohol, for instance. The team points out, however, that the second technique does not need a thermal image of the sober person to determine whether that individual has been drinking.
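
A toy version of that second idea might look like the sketch below: compare average temperatures at a few facial regions and flag the warm-nose, cool-forehead pattern. The pixel coordinates, threshold and synthetic frame are all invented for illustration and bear no relation to the published algorithms' actual parameters.

```python
import numpy as np

# Invented region coordinates (row, column) in a 180x180 thermal frame.
REGIONS = {"nose": (120, 90), "forehead": (60, 90)}

def region_temp(frame, centre, radius=3):
    r, c = centre
    return float(frame[r - radius:r + radius, c - radius:c + radius].mean())

def looks_inebriated(frame, threshold=1.5):
    # Flag the reported pattern: a warm nose relative to a cooler forehead.
    return (region_temp(frame, REGIONS["nose"])
            - region_temp(frame, REGIONS["forehead"])) > threshold

# Synthetic frame with a deliberately warmed "nose" patch.
frame = 30.0 + np.random.default_rng(2).normal(scale=0.2, size=(180, 180))
frame[115:125, 85:95] += 2.0
print(looks_inebriated(frame))   # True for this synthetic example
```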


Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Sunday, September 23, 2012

Frankenstein programmers test a cybersecurity monster

ScienceDaily (Aug. 27, 2012) — In order to catch a thief, you have to think like one.

UT Dallas computer scientists are trying to stay one step ahead of cyber attackers by creating their own monster. Their monster can cloak itself as it steals and reconfigures information in a computer program.

In part because of the potentially destructive nature of their technology, creators have named this software system Frankenstein, after the monster-creating scientist in author Mary Shelley's novel, Frankenstein; or The Modern Prometheus.

"Shelley's story is an example of a horror that can result from science, and similarly, we intend our creation as a warning that we need better detections for these types of intrusions," said Dr. Kevin Hamlen, associate professor of computer science at UT Dallas who created the software, along with his doctoral student Vishwath Mohan. "Criminals may already know how to create this kind of software, so we examined the science behind the danger this represents, in hopes of creating counter measures."

Frankenstein is not a computer virus, which is a program that can multiply and take over other machines. But, it could be used in cyber warfare to provide cover for a virus or another type of malware, or malicious software.

In order to avoid antivirus software, malware typically mutates every time it copies itself onto another machine. Antivirus software figures out the pattern of change and continues to scan for sequences of code that are known to be suspicious.

Frankenstein evades this scanning mechanism. It takes code from programs already on a computer and repurposes it, stringing it together to accomplish the malware's malicious task with new instructions.

"We wanted to build something that learns as it propagates," Hamlen said. "Frankenstein takes from what is already there and reinvents itself."

"Just as Shelley's monster was stitched from body parts, our Frankenstein also stitches software from original program parts, so no red flags are raised," he said. "It looks completely different, but its code is consistent with something normal."

Hamlen said Frankenstein could be used to aid government counter terrorism efforts by providing cover for infiltration of terrorist computer networks. Hamlen is part of the Cyber Security Research and Education Center in the Erik Jonsson School of Engineering and Computer Science.

The UT Dallas research is the first published example describing this type of stealth technology, Hamlen said.

"As a proof-of-concept, we tested Frankenstein on some simple algorithms that are completely benign," Hamlen said. "We did not create damage to anyone's systems."

The next step, Hamlen said, is to create more complex versions of the software.

Frankenstein was described in a paper published online (https://www.usenix.org/conference/woot12/frankenstein-stitching-malware-benign-binaries) in conjunction with a presentation at a recent USENIX Workshop on Offensive Technologies.

The research was supported by the National Science Foundation and Air Force Office of Scientific Research.


Story Source:

The above story is reprinted from materials provided by University of Texas, Dallas.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Saturday, September 22, 2012

Web-TV: A perfect match?

ScienceDaily (Sep. 3, 2012) — Do you surf the web in front of the TV, or tweet what you are watching? EU-funded researchers are creating technologies that combine web, social media and TV to enhance our experience and interactions across media.

Research shows that consumers watch TV and use the web simultaneously for up to 3.5 hours daily, and that 42% of UK adults have discussed the programmes they were watching on social networks. Digital providers and broadcasters are always trying to improve entertainment, and combining social media, the web and TV into a single user experience is an important step.

NoTube, a European-funded project, brought the digital and broadcasting industries together, along with experts in platform integration, with the aim of linking media together so consumers can watch shows and interact with friends regardless of the devices they use.

'Our prototypes show that the "Web+TV" experiences which most benefit viewers and users will be those using open standards and that work across different hardware, software and service providers. We have tried to develop solutions that give viewers choice and flexibility,' explains Dan Brickley, a researcher at VU University Amsterdam, the Netherlands, one of the lead researchers in the project.

Forging links

The key to NoTube's approach is 'linked data', where information about a viewer -- such as preferences, social networks, contacts and favourite shows -- is stored 'in the cloud'. The data may be held in different databases and formats but it is made accessible by conforming to recognised industry standards for data structure, storage, access and linking.
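
As a small illustration of what 'linked data' about a viewer can look like, the sketch below builds a JSON-LD-style record in Python. The identifiers and vocabulary URLs are placeholders, not NoTube's actual schema.

```python
import json

# Placeholder identifiers and vocabulary -- illustrative only.
viewer = {
    "@context": {"name": "http://schema.org/name",
                 "likes": "https://example.org/vocab/likes"},
    "@id": "https://example.org/viewers/alice",
    "name": "Alice",
    "likes": [
        {"@id": "https://example.org/programmes/nature-doc-001"},
        {"@id": "https://example.org/programmes/quiz-show-042"},
    ],
}

# Because the record uses shared identifiers, a recommender, a programme
# guide and a second-screen app can all link to the same data.
print(json.dumps(viewer, indent=2))
```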

'The concept of linked data allowed the NoTube team to set reference standards for online publishers. This made it possible, for example, for broadcasters to create personalised news environments and online programme guides, showing users what they most want to see. Moreover, these work across devices and in multiple languages,' says Brickley.

'When NoTube launched, our plan to bring the web and TV closer together via shared data models and content across multiple devices was ambitious and visionary,' Brickley continues. 'Today, the TV industry has caught up, but their cross-platform and personalised services are proprietary. The results and prototypes from NoTube are now more relevant than ever and show the way forward to develop personalised TV applications where the user still controls their data.'

With a vast array of devices and solutions marketed to viewers, it is difficult to achieve a consistent experience when linking online activity and viewing. The NoTube project looked at how this Web+TV combination could work from every angle, developing user interfaces along with underlying technology standards to support interoperability and data linking.

Is linked data secure?

The development of cross-platform solutions was a key focus of the team. 'Hardware engineers at TV companies won't necessarily be skilled at making highly usable programme guide catalogues, or recommendation engines, for example,' explains Brickley. 'As the number of TV channels increases, being able to find and filter the programmes you want will be really useful. We developed a prototype recommendation engine and sharing system which solves this problem and which can be deployed on any media platform.'

Systems using personal data must be secure and respect privacy, which is often a stumbling block for commercial solutions. 'People are often overcautious and misunderstand the risks involved, but they also need to understand how their supposedly anonymous online activities might inadvertently "fingerprint" them. It may take a few more high-profile privacy controversies, like the Netflix prize lawsuit or the AOL search logs case, before users adopt healthy privacy habits,' said Brickley.

Recognising that people use default settings and fail to guard personal data, the NoTube architecture builds in security to ensure linked data remains secure.

Two media, two screens, many people

NoTube also found ways of linking people viewing TV. Led by BBC R&D, the team developed methods of giving programme recommendations based on social activity and built technologies that make it easier for viewers to discuss and share TV information across their networks, whilst maintaining privacy.

This led to the development of N-screen, a web application which can help small groups decide what to watch. Users share programmes with one another in real time and change the TV channel using drag and drop -- improving the experience of viewers as they watch the same programme, whilst using a second screen to interact with each other.

The project also looked at the possibility of using a smartphone as a TV remote control. 'The key aspect of N-screen or the smartphone remote is that they work by linking different data systems; their functionality is not limited by the type of device or screen used -- giving more choice to consumers,' notes Brickley.

New experiences

The NoTube partners were keen to develop other functional prototypes, such as the iFanzy service that delivers personalised and contextualised advertising and TV. It uses a range of data, including time of day, device used and viewing preferences, to serve more engaging (and therefore more successful) ads. The system also improves the delivery of audio-visual advertisements by adjusting volumes and automatically selecting the best positioning on the screen.

Another major result is the NoTube TV API which broadcasters can use to build new web-based applications and systems that make TV more interactive and 'do more'. 'The API opens up a lot of what we have developed in the project to broadcasters and media companies so they can build some of our functionality into their own platforms,' Brickley comments.

Looking to the future

'We want the user to be back in the driving seat,' says Brickley. 'NoTube can help people decide what to watch and share, record their preferences, find out more about a programme and have smarter conversations about TV programmes.'

Project partners are promoting results to the technical community; they hope that forward-thinking companies will recognise the potential impact that cross-platform and open source solutions could have. 'Much of our research output and position papers are for a fairly small group of decision-makers in the TV industry and in standards organisations,' Brickley explains. 'But we have received excellent feedback and are involved in various discussions with the W3C standards community.'

The NoTube project received EUR 6.15 million (of total EUR 9.25 million project budget) in research funding under the EU's Seventh Framework Programme (FP7).


Story Source:

The above story is reprinted from materials provided by CORDIS.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Math tree may help root out fraudsters: Applying algorithm to social networks can reveal hidden connections criminals use to commit fraud

ScienceDaily (Sep. 5, 2012) — Fraudsters beware: the more your social networks connect you and your accomplices to the crime, the easier it will be to shake you from the tree.

The Steiner tree, that is.

In an article recently published in the journal Computer Fraud and Security, University of Alberta researcher Ray Patterson and colleagues from the University of Connecticut and University of California -- Merced outlined the connection linking fraud cases and the algorithm designed by Swiss mathematician Jakob Steiner. Fraud is a problem that costs Canadians billions of dollars annually and countless hours of police investigations. Patterson says that building the algorithm into fraud investigation software may provide important strategic advantages.

The criminal path of least resistance

To quote a television gumshoe, everything's connected. Figuring out who knows who and who has access to the money is like playing a game of connect-the-dots. Patterson says that for crimes like fraud, the fewer players in the scheme, the more likely it will be accomplished. Maintaining a small group of players is also what links it to the Steiner tree. He says that by analyzing various connecting social networks -- email, Facebook or the like -- finding out the who, what and how of the crime can be boiled down to numbers.

"You're really trying to find the minimum set of connectors that connect these people to the various [network] resources," he said. "The minimum number of people required is what's most likely to be the smoking gun. You can do it with math, once you know what the networks are."

Fraud and the Steiner tree, by the numbers

In their article, Patterson and his colleagues explored how networks such as phone calls, business partnerships and family relationships are used to form essential relationships in a fraud investigation. When these same relationships are layered, a pattern of connection becomes obvious. Once unnecessary links are removed and false leads are extracted, the remaining connections are most likely the best suspects. Patterson says that finding the shortest connection between the criminals and the crime is the crux of the Steiner tree.
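
The idea can be sketched directly with the approximate Steiner tree routine in the networkx library: given a social graph and a set of 'terminal' nodes that must be connected (suspects and the resources used in the fraud), it returns a small connecting subtree. The graph below is invented, and the approximation routine is a stand-in for whatever algorithm an investigative tool would actually use.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Invented social graph: edges are observed links (calls, partnerships,
# family ties) between people and resources.
G = nx.Graph()
G.add_edges_from([
    ("suspect_a", "associate_1"), ("associate_1", "bank_account"),
    ("suspect_b", "associate_1"), ("suspect_b", "associate_2"),
    ("associate_2", "shell_company"), ("bank_account", "shell_company"),
])

# Terminals: the nodes an investigator must connect.
terminals = ["suspect_a", "suspect_b", "shell_company"]
T = steiner_tree(G, terminals)   # approximate minimum connecting subtree
print(sorted(T.edges()))         # the smallest set of links tying them together
```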

"All of these things that we see in life, behind them is a mathematical representation," said Patterson. "There are many, many different algorithms that we can pull off a shelf and apply to real-life problems."

A potential tool for the long arm of the law?

Patterson says that with the amount of work that could potentially go into investigating a fraud case, such as obtaining warrants for phone or email records, and identifying and interviewing potential suspects, developing a program that uses a Steiner tree algorithm may save a significant portion of investigators' time -- time that, he says, could likely be reallocated to backlog or cold case files. "If you can reduce your legwork by even 20 per cent, that has massive manpower implications. I think algorithms like this one could help you reduce your legwork a lot more than that," he said.

Although there is software that police and other law enforcement agencies can use to solve fraud, Patterson sees no evidence that those programs use a Steiner tree algorithm, something he says would bring some structure to an unstructured area. He hopes programmers and investigators will take note of the findings and make changes to their practices.

"It might take several years or many years before anyone picks it up," said Patterson. "But it's a good thing if we can point people towards what's useful."


Story Source:

The above story is reprinted from materials provided by University of Alberta, via EurekAlert!, a service of AAAS.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Ram D Gopal, Raymond A Patterson, Erik Rolland, Dmitry Zhdanov. Social network meets Sherlock Holmes: investigating the missing links of fraud. Computer Fraud & Security, 2012; 2012 (7): 12 DOI: 10.1016/S1361-3723(12)70074-X

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Friday, September 21, 2012

Thwarting the cleverest attackers: Even most secure-seeming computer is shockingly vulnerable to attack

ScienceDaily (May 1, 2012) — Savvy hackers can steal a computer's secrets by timing its data storage transactions or measuring its power use. New research shows how to stop them.

In the last 10 years, cryptography researchers have demonstrated that even the most secure-seeming computer is shockingly vulnerable to attack. The time it takes a computer to store data in memory, fluctuations in its power consumption and even the noises it emits can betray information to a savvy assailant.

Attacks that use such indirect sources of information are called side-channel attacks, and the increasing popularity of cloud computing makes them an even greater threat. An attacker would have to be pretty motivated to install a device in your wall to measure your computer's power consumption. But it's comparatively easy to load a bit of code on a server in the cloud and eavesdrop on other applications it's running.

Fortunately, even as they've been researching side-channel attacks, cryptographers have also been investigating ways of stopping them. Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science at MIT, and her former student Guy Rothblum, who's now a researcher at Microsoft Research, recently posted a long report on the website of the Electronic Colloquium on Computational Complexity, describing a general approach to mitigating side-channel attacks. At the Association for Computing Machinery's Symposium on Theory of Computing (STOC) in May, Goldwasser and colleagues will present a paper demonstrating how the technique she developed with Rothblum can be adapted to protect information processed on web servers.

In addition to preventing attacks on private information, Goldwasser says, the technique could also protect devices that use proprietary algorithms so that they can't be reverse-engineered by pirates or market competitors -- an application that she, Rothblum and others described at last year's AsiaCrypt conference.

Today, when a personal computer is in use, it's usually running multiple programs -- say, a word processor, a browser, a PDF viewer, maybe an email program or a spreadsheet program. All the programs are storing data in memory, but the laptop's operating system won't let any program look at the data stored by any other. The operating systems running on servers in the cloud are no different, but a malicious program could launch a side-channel attack simply by sending its own data to memory over and over again. From the time the data storage and retrieval takes, it can infer what the other programs are doing with remarkable accuracy.

Goldwasser and Rothblum's technique obscures the computational details of a program, whether it's running on a laptop or a server. Their system converts a given computation into a sequence of smaller computational modules. Data fed into the first module is encrypted, and at no point during the module's execution is it decrypted. The still-encrypted output of the first module is fed into the second module, which encrypts it in yet a different way, and so on.

The encryption schemes and the modules are devised so that the output of the final module is exactly the output of the original computation. But the operations performed by the individual modules are entirely different. A side-channel attacker could extract information about how the data in any given module is encrypted, but that won't let him deduce what the sequence of modules does as a whole. "The adversary can take measurements of each module," Goldwasser says, "but they can't learn anything more than they could from a black box."
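
The following toy sketch illustrates only the pipeline structure just described: a computation split into modules, each of which receives masked ("encrypted") input and hands freshly masked output to the next, so the plaintext never appears between modules. It is emphatically not Goldwasser and Rothblum's construction; the additive masking, the modulus and the open handling of masks are simplifications invented here purely to make the data flow visible.

# Toy pipeline of modules over masked data (illustrative sketch only).
import secrets

N = 2**61 - 1   # toy modulus; all arithmetic below is mod N

def encrypt(x, mask):           # "ciphertext" = x + mask (mod N)
    return (x + mask) % N

def decrypt(ct, mask):
    return (ct - mask) % N

def module_add(const):
    # Module computing x -> x + const on masked data, re-masking its output.
    def run(ct, in_mask, out_mask):
        # (x + in_mask) + const - in_mask + out_mask = (x + const) + out_mask
        return (ct + const - in_mask + out_mask) % N
    return run

def module_mul(const):
    # Module computing x -> const * x on masked data, re-masking its output.
    def run(ct, in_mask, out_mask):
        # const*(x + in_mask) - const*in_mask + out_mask = const*x + out_mask
        return (ct * const - const * in_mask + out_mask) % N
    return run

# The original computation f(x) = 2 * (x + 3), split into two modules.
modules = [module_add(3), module_mul(2)]

x = 41
masks = [secrets.randbelow(N) for _ in range(len(modules) + 1)]
ct = encrypt(x, masks[0])
for i, module in enumerate(modules):
    ct = module(ct, masks[i], masks[i + 1])   # data stays masked between modules

assert decrypt(ct, masks[-1]) == 2 * (x + 3)  # the pipeline's output matches f(x)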

The report by Goldwasser and Rothblum describes a type of compiler, a program that takes code written in a form intelligible to humans and converts it into low-level instructions intelligible to a computer. In that compiled code, the computational modules are an abstraction: the instruction that inaugurates a new module looks no different from the instruction that concluded the last one. But in the STOC paper, the modules are executed on different servers on a network.

According to Nigel Smart, a professor of cryptology in the computer science department at the University of Bristol in England, the danger of side-channel attacks "has been known since the late '90s."

"There's a lot of engineering that was done to try to prevent this from being a problem," Smart says, "a huge amount of engineering work. This is a megabucks industry." Much of that work, however, has relied on trial and error, Smart says. Goldwasser and Rothblum's study, on the other hand, "is a much more foundational study, looking at really foundational, deep questions about what is possible."

Moreover, Smart says, previous work on side-channel attacks tended to focus on the threat posed to handheld devices, such as cellphones and smart cards. "It would seem to me that the stuff that is more likely to take off in the long run is the stuff that's talking about servers," Smart says. "I don't know anyone else outside MIT who's looking at that."

Smart cautions, however, that the work of Goldwasser and her colleagues is unlikely to yield practical applications in the near future. "In security, and especially cryptography, it takes a long time to go from an academic idea to something that's actually used in the real world," Smart says. "They're looking at what could be possible in 10, 20 years' time."


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Thursday, September 20, 2012

Cyber security risk to smart grids and intelligent buildings

ScienceDaily (Aug. 13, 2012) — Building owners and designers, and particularly members of the building services industry, are racing to implement intelligent buildings and smart grids, which are widely heralded as a boon in terms of both energy efficiency and facilities management. But many are overlooking the potential risk of malicious attacks on these highly networked control systems.

Writing in the latest issue of the journal Intelligent Buildings International, David Fisk of the Laing O'Rourke Centre for Systems Engineering and Innovation at Imperial College London warns that, as we have seen with the humble PC, the basic building blocks of intelligent buildings -- the process controllers that make up the distributed building management system (BMS) -- can be infected by malware, often through a 'backdoor' left ajar on a trusted network.

David Fisk notes that: "… the basic system -- for example, the bare minimum standby generators -- should normally be independent of the intelligent-building software (much as a warship still carries a sextant should the GPS be jammed)." And he warns:

"This is not current practice as far as can be discerned from existing ASHRAE and CIBSE standards."

Fisk's article, 'Cyber security, building automation, and the intelligent building', begins with a short history of the rise in intelligent control -- from the 1960s, when the only real threat was an irate engineer armed with a hammer, through the movement away from bespoke hardware and software to proprietary software such as the ubiquitous Windows system during the 1980s, to the post-9/11 emergence of the anonymous cyber-aggressor.

The middle section of the article then presents a review of a more recent attack, now known as Stuxnet, which demonstrated the wide-ranging havoc that could be caused by malicious software infecting plant controllers. This section also explains how such attacks now present a threat to the 'smart grid' and other open systems.

Finally, the article discusses how risks may be assessed and mitigated, using a hypothetical attack on the heating, ventilation and air-conditioning (HVAC) systems of a super-casino to illustrate the urgent need for the building systems design community to re-think traditional security strategies. As a minimum, building services professionals should deploy a 'whole-system design approach' and owners should plan for periods during which 'intelligence' is not available.


Story Source:

The above story is reprinted from materials provided by Taylor & Francis, via AlphaGalileo.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

David Fisk. Cyber security, building automation, and the intelligent building. Intelligent Buildings International, 2012: 1. DOI: 10.1080/17508975.2012.695277

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Researchers make quantum processor capable of factoring a composite number into prime factors

ScienceDaily (Aug. 19, 2012) — Computing prime factors may sound like an elementary math problem, but try it with a large number, say one that contains more than 600 digits, and the task becomes enormously challenging and impossibly time-consuming. Now, a group of researchers at UC Santa Barbara has designed and fabricated a quantum processor capable of factoring a composite number -- in this case the number 15 -- into its constituent prime factors, 3 and 5.

Although modest compared to a 600-digit number, the achievement represents a milestone on the road map to building a quantum computer capable of factoring much larger numbers, with significant implications for cryptography and cybersecurity. The results are published in the advance online issue of the journal Nature Physics.

"Fifteen is a small number, but what's important is we've shown that we can run a version of Peter Shor's prime factoring algorithm on a solid state quantum processor. This is really exciting and has never been done before," said Erik Lucero, the paper's lead author. Now a postdoctoral researcher in experimental quantum computing at IBM, Lucero was a doctoral student in physics at UCSB when the research was conducted and the paper was written.

"What is important is that the concepts used in factoring this small number remain the same when factoring much larger numbers," said Andrew Cleland, a professor of physics at UCSB and a collaborator on the experiment. "We just need to scale up the size of this processor to something much larger. This won't be easy, but the path forward is clear."

Practical applications motivated the research, according to Lucero, who explained that factoring very large numbers is at the heart of cybersecurity protocols, such as the most common form of public-key encryption, known as RSA. "Anytime you send a secure transmission -- like your credit card information -- you are relying on security that is based on the fact that it's really hard to find the prime factors of large numbers," he said. Using a classical computer and the best-known classical algorithm, factoring something like RSA Laboratories' largest published number -- which contains over 600 decimal digits -- would take longer than the age of the universe, he continued.
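
The toy Python sketch below, using textbook-sized primes, makes that connection concrete: anyone who can factor the public modulus can recompute the RSA private key. The primes, exponent and message are illustrative values only; real RSA moduli are hundreds of digits long.

# Toy RSA: factoring the modulus n recovers the private key.
from math import gcd

p, q = 61, 53                  # the secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 1234
ciphertext = pow(message, e, n)            # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message    # decrypt with the private key

# An attacker who factors n into p and q can derive the same private key:
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert pow(ciphertext, d_recovered, n) == message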

A quantum computer could reduce this wait time to a few tens of minutes. "A quantum computer can solve this problem faster than a classical computer by about 15 orders of magnitude," said Lucero. "This has widespread effect. A quantum computer will be a game changer in a lot of ways, and certainly with respect to computer security."

So, if quantum computing makes RSA encryption no longer secure, what will replace it? The answer, Lucero said, is quantum cryptography. "It's not only harder to break, but it allows you to know if someone has been eavesdropping, or listening in on your transmission. Imagine someone wiretapping your phone, but now, every time that person tries to listen in on your conversation, the audio gets jumbled. With quantum cryptography, if someone tries to extract information, it changes the system, and both the transmitter and the receiver are aware of it."

To conduct the research, Lucero and his colleagues designed and fabricated a quantum processor to map the problem of factoring the number 15 onto a purpose-built superconducting quantum circuit. "We chose the number 15 because it is the smallest composite number that satisfies the conditions appropriate to test Shor's algorithm -- it is a product of two prime numbers, and it's not even," he explained.
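
For readers curious how order-finding turns into factors, here is a hedged classical sketch of the reduction used in Shor's algorithm, with the order-finding step done by brute force in place of the quantum hardware; the function names are invented for the example, and this is not the UCSB team's code.

# Classical skeleton of Shor's algorithm: order-finding done by brute force.
from math import gcd
from random import randrange

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), computed classically for small n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n):
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:                 # lucky guess already shares a factor
            return g, n // g
        r = order(a, n)           # the step a quantum processor would accelerate
        if r % 2 == 1:
            continue              # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue              # trivial square root, try another base
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p
        q = gcd(y + 1, n)
        if 1 < q < n:
            return q, n // q

print(factor_via_order_finding(15))   # e.g. (3, 5)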

The quantum processor was implemented using a quantum circuit composed of four superconducting phase qubits -- the quantum equivalents of transistors -- and five microwave resonators. The complexity of operating these nine quantum elements required building a control system that allows for precise operation and a significant degree of automation -- a prototype that will facilitate scaling up to larger and more complex circuits. The research represents a significant step toward a scalable quantum architecture while meeting a benchmark for quantum computation, as well as having historical relevance for quantum information and cryptography.

"After repeating the experiment 150,000 times, we showed that our quantum processor got the right answer just under half the time" Lucero said. "The best we can expect from Shor's algorithm is to get the right answer exactly 50 percent of the time, so our results were essentially what we'd expect theoretically."

The next step, according to Lucero, is to increase the quantum coherence times and go from nine quantum elements to hundreds, then thousands, and on to millions. "Now that we know 15=3x5, we can start thinking about how to factor larger -- dare I say -- more practical numbers," he said.

Other UCSB researchers participating in the study include John Martinis, professor of physics; Rami Barends, Yu Chen, Matteo Mariantoni, and Y. Yin, postdoctoral fellows in physics; and physics graduate students Julian Kelly, Anthony Megrant, Peter O'Malley, Daniel Sank, Amit Vainsencher, Jim Wenner, and Ted White.


Story Source:

The above story is reprinted from materials provided by University of California - Santa Barbara.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Erik Lucero, R. Barends, Y. Chen, J. Kelly, M. Mariantoni, A. Megrant, P. O’Malley, D. Sank, A. Vainsencher, J. Wenner, T. White, Y. Yin, A. N. Cleland & John M. Martinis. Computing prime factors with a Josephson phase qubit quantum processor. Nature Physics, 19 August 2012 DOI: 10.1038/nphys2385

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Wednesday, September 19, 2012

Information overload in the era of 'big data'

ScienceDaily (Aug. 20, 2012) — Botany is plagued by the same problem as the rest of science and society: our ability to generate data quickly and cheaply is surpassing our ability to access and analyze it. In this age of big data, scientists facing too much information rely on computers to search large data sets for patterns that are beyond the capability of humans to recognize -- but computers can only interpret data based on the strict set of rules in their programming.

New tools called ontologies provide the rules computers need to transform information into knowledge, by attaching meaning to data, thereby making those data retrievable by computers and more understandable to human beings. Ontology, from the Greek word for the study of being or existence, traditionally falls within the purview of philosophy, but the term is now used by computer and information scientists to describe a strategy for representing knowledge in a consistent fashion. An ontology in this contemporary sense is a description of the types of entities within a given domain and the relationships among them.

A new article in this month's American Journal of Botany by Ramona Walls (New York Botanical Garden) and colleagues describes how scientists build ontologies such as the Plant Ontology (PO) and how these tools can transform plant science by facilitating new ways of gathering and exploring data.

When data from many divergent sources, such as data about some specific plant organ, are associated or "tagged" with particular terms from a single ontology or set of interrelated ontologies, the data become easier to find, and computers can use the logical relationships in the ontologies to correctly combine the information from the different databases. Moreover, computers can also use ontologies to aggregate data associated with the different subclasses or parts of entities.

For example, suppose a researcher is searching online for all examples of gene expression in a leaf. Any botanist performing this search would include experiments that described gene expression in petioles and midribs or in a frond. However, a search engine would not know that it needs to include these terms in its search -- unless it was told that a frond is a type of leaf, and that every petiole and every midrib are parts of some leaf. It is this information that ontologies provide.
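
A minimal sketch of that kind of ontology-aware query expansion might look like the Python below; the relation tables, the annotated "experiments" and the function names are invented for illustration and are not drawn from the actual Plant Ontology.

# Toy ontology-driven query expansion using is-a and part-of relations.
IS_A = {          # child -> parent: "a frond is a type of leaf"
    "frond": "leaf",
}
PART_OF = {       # part -> whole: "every petiole and midrib is part of some leaf"
    "petiole": "leaf",
    "midrib": "leaf",
}

def expand(term):
    """Return the query term plus every term that is a subtype or part of it."""
    hits = {term}
    hits |= {child for child, parent in IS_A.items() if parent == term}
    hits |= {part for part, whole in PART_OF.items() if whole == term}
    return hits

annotations = {
    "experiment-1": "petiole",   # gene expression measured in a petiole
    "experiment-2": "frond",
    "experiment-3": "root",
}

terms = expand("leaf")           # {'leaf', 'frond', 'petiole', 'midrib'}
matches = [e for e, t in annotations.items() if t in terms]
print(matches)                   # ['experiment-1', 'experiment-2']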

The article in the American Journal of Botany by Walls and colleagues describes what ontologies are, why they are relevant to plant science, and some of the basic principles of ontology development. It includes an overview of the ontologies that are relevant to botany, with a more detailed description of the PO and the challenges of building an ontology that covers all green plants. The article also describes four key areas of plant science that could benefit from the use of ontologies: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Although most of the examples in this article are drawn from plant science, the principles could apply to any group of organisms, and the article should be of interest to zoologists as well.

As genomic and phenomic data become available for more species, many different research groups are embarking on the annotation of their data and images with ontology terms. At the same time, cross-species queries are becoming more common, causing more researchers in plant science to turn to ontologies. Ontology developers are working with the scientists who generate data to make sure ontologies accurately reflect current science, and with database developers and publishers to find ways to make it easier for scientists to associate their data with ontologies.


Story Source:

The above story is reprinted from materials provided by American Journal of Botany, via EurekAlert!, a service of AAAS.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

R. L. Walls, B. Athreya, L. Cooper, J. Elser, M. A. Gandolfo, P. Jaiswal, C. J. Mungall, J. Preece, S. Rensing, B. Smith, D. W. Stevenson. Ontologies as integrative tools for plant science. American Journal of Botany, 2012; 99 (8): 1263 DOI: 10.3732/ajb.1200222

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Search technology that can gauge opinion and predict the future

ScienceDaily (Aug. 16, 2012) — Inspired by a system for categorising books proposed by an Indian librarian more than 50 years ago, a team of EU-funded researchers have developed a new kind of internet search that takes into account factors such as opinion, bias, context, time and location. The new technology, which could soon be in use commercially, can display trends in public opinion about a topic, company or person over time -- and it can even be used to predict the future.

'Do a search for the word "climate" on Google or another search engine and what you will get back is basically a list of results featuring that word: there's no categorisation, no specific order, no context. Current search engines do not take into account the dimensions of diversity: factors such as when the information was published, if there is a bias toward one opinion or another inherent in the content and structure, who published it and when,' explains Fausto Giunchiglia, a professor of computer science at the University of Trento in Italy.

But can search technology be made to identify and embrace diversity? Can a search engine tell you, for example, how public opinion about climate change has changed over the last decade? Or how hot the weather will be a century from now, by aggregating current and past estimates from different sources?

It seems that it can, thanks to a pioneering combination of modern science and a decades-old classification method, brought together by European researchers in the LivingKnowledge project. Supported by EUR 4.8 million in funding from the European Commission, the LivingKnowledge team, coordinated by Prof. Giunchiglia, adopted a multidisciplinary approach to developing new search technology, drawing on fields as diverse as computer science, social science, semiotics and library science.

Indeed, the so-called father of library science, Sirkali Ramamrita Ranganathan, an Indian librarian, served as a source of inspiration for the researchers. In the 1920s and 1930s, Ranganathan developed the first major analytico-synthetic, or faceted, classification system. Using this approach, objects -- books, in the case of Ranganathan; web and database content, in the case of the LivingKnowledge team -- are assigned multiple characteristics and attributes (facets), enabling the classification to be ordered in multiple ways, rather than in a single, predetermined, taxonomic order. Using the system, an article about the effects on agriculture of climate change written in Norway in 1990 might be classified as 'Geography; Climate; Climate change; Agriculture; Research; Norway; 1990.'
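
The Python sketch below shows the basic mechanics of faceted retrieval in that spirit; the facet names, documents and function are invented for illustration and are not the LivingKnowledge system itself.

# Toy faceted classification and retrieval.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    facets: dict   # facet name -> value, e.g. {"topic": "climate change", ...}

docs = [
    Document("Effects of climate change on agriculture",
             {"topic": "climate change", "domain": "agriculture",
              "place": "Norway", "year": 1990}),
    Document("Urban air quality survey",
             {"topic": "air quality", "domain": "public health",
              "place": "Italy", "year": 2005}),
]

def faceted_search(documents, **constraints):
    """Return documents whose facets match every requested facet value."""
    return [d for d in documents
            if all(d.facets.get(k) == v for k, v in constraints.items())]

print([d.title for d in faceted_search(docs, topic="climate change", place="Norway")])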

In order to understand the classification system better and implement it in search engine technology, the LivingKnowledge researchers turned to the Indian Statistical Institute, a project partner, which uses faceted classification on a daily basis.

'Using their knowledge we were able to turn Ranganathan's pseudo-algorithm into a computer algorithm and the computer scientists were able to use it to mine data from the web, extract its meaning and context, assign facets to it, and use these to structure the information based on the dimensions of diversity,' Prof. Giunchiglia says.

Researchers at the University of Pavia in Italy, another partner, drew on their expertise in extracting meaning from web content -- not just from text and multimedia content, but also from the way the information is structured and laid out -- in order to infer bias and opinions, adding another facet to the data.

'We are able to identify the bias of authors on a certain subject and whether their opinions are positive or negative,' the LivingKnowledge coordinator says. 'Facts are facts, but any information about an event, or on any subject, is often surrounded by opinions and bias.'
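
As a very rough illustration of lexicon-based opinion scoring, the sketch below counts words from small positive and negative word lists; the lists and examples are invented, and the project's actual bias and opinion detection is, as described above, far more sophisticated.

# Crude lexicon-based opinion scoring (illustrative stand-in only).
POSITIVE = {"beneficial", "improve", "success", "clean", "safe"}
NEGATIVE = {"harmful", "risk", "failure", "dirty", "dangerous"}

def opinion_score(text):
    """Positive score suggests a favourable tone, negative a critical one."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(opinion_score("clean energy could improve public health"))   #  2
print(opinion_score("the policy is a dangerous failure"))          # -2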

From libraries of the 1930s to space travel in 2034...

The technology was implemented in a testbed, now available as open source software, and used for trials based around two intriguing application scenarios.

Working with Austrian social research institute SORA, the team used the LivingKnowledge system to identify social trends and monitor public opinion in both quantitative and qualitative terms. Used for media content analysis, the system could help a company understand the impact of a new advertising campaign, showing how it has affected brand recognition over time and which social groups have been most receptive. Alternatively, a government might use the system to gauge public opinion about a new policy, or a politician could use it to respond in the most publicly acceptable way to a rival candidate's claims.

With Barcelona Media, a non-profit research foundation supported by Yahoo!, and with the Netherlands-based Internet Memory Foundation, the LivingKnowledge team looked not only at current and past trends, but extrapolated them and drew on forecasts extracted from existing data to try to predict the future. Their Future Predictor application is able to make searches based on questions such as 'What will oil prices be in 2050?' or 'How much will global temperatures rise over the next 100 years?' and find relevant information and forecasts from today's web. For example, a search for the year 2034 turns up 'space travel' as the most relevant topic indexed in today's news.

'More immediately, this application scenario provides functionality for detecting trends even before these trends become apparent in daily events -- based on integrated search and navigation capabilities for finding diverse, multi-dimensional information depending on content, bias and time,' Prof. Giunchiglia explains.

Several of the project partners have plans to implement the technology commercially, and the project coordinator intends to set up a non-profit foundation to build on the LivingKnowledge results at a time when demand for this sort of technology is only likely to increase.

As Prof. Giunchiglia points out, Google fundamentally changed the world by providing everyone with access to much of the world's information, but it did it for people: currently only humans can understand the meaning of all that data, so much so that information overload is a common problem. As we move into a 'big data' age in which information about everything and anything is available at the touch of a button, the meaning of that information needs to be understandable not just by humans but also by machines, so quantity must come combined with quality. The LivingKnowledge approach addresses that problem.

'When we started the project, no one was talking about big data. Now everyone is and there is increasing interest in this sort of technology,' Prof. Giunchiglia says. 'The future will be all about big data -- we can't say whether it will be good or bad, but it will certainly be different.'


Story Source:

The above story is reprinted from materials provided by CORDIS Features, formerly ICT Results.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Tuesday, September 18, 2012

Biologists create first predictive computational model of gene networks that control development of sea-urchin embryos

ScienceDaily (Aug. 29, 2012) — As an animal develops from an embryo, its cells take diverse paths, eventually forming different body parts -- muscles, bones, heart. In order for each cell to know what to do during development, it follows a genetic blueprint, which consists of complex webs of interacting genes called gene regulatory networks.

Biologists at the California Institute of Technology (Caltech) have spent the last decade or so detailing how these gene networks control development in sea-urchin embryos. Now, for the first time, they have built a computational model of one of these networks.

This model, the scientists say, does a remarkably good job of calculating what these networks do to control the fates of different cells in the early stages of sea-urchin development -- confirming that the interactions among a few dozen genes suffice to tell an embryo how to start the development of different body parts in their respective spatial locations. The model is also a powerful tool for understanding gene regulatory networks in a way not previously possible, allowing scientists to better study the genetic bases of both development and evolution.

"We have never had the opportunity to explore the significance of these networks before," says Eric Davidson, the Norman Chandler Professor of Cell Biology at Caltech. "The results are amazing to us."

The researchers described their computer model in a paper in the Proceedings of the National Academy of Sciences that appeared as an advance online publication on August 27.

The model encompasses the gene regulatory network that controls the first 30 hours of the development of endomesoderm cells, which eventually form the embryo's gut, skeleton, muscles, and immune system. This network -- so far the most extensively analyzed developmental gene regulatory network of any animal organism -- consists of about 50 regulatory genes that turn one another on and off.

To create the model, the researchers distilled everything they knew about the network into a series of logical statements that a computer could understand. "We translated all of our biological knowledge into very simple Boolean statements," explains Isabelle Peter, a senior research fellow and the first author of the paper. In other words, the researchers represented the network as a series of if-then statements that determine whether certain genes in different cells are on or off (i.e., if gene A is on, then genes B and C will turn off).

By computing the results of each sequence hour by hour, the model determines when and where in the embryo each gene is on and off. Comparing the computed results with experiments, the researchers found that the model reproduced the data almost exactly. "It works surprisingly well," Peter says.
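
A toy Boolean-network simulation in the spirit of the model described above might look like the following; the genes, regulatory rules and time scale are invented and are not the published sea-urchin network.

# Toy Boolean gene-network simulation, updated hour by hour.
def step(state):
    """One update: each gene's next value is a Boolean function of the current state."""
    return {
        "geneA": not state["geneC"],                  # A is repressed by C
        "geneB": state["geneA"],                      # A activates B
        "geneC": state["geneA"] and state["geneB"],   # A and B jointly activate C
    }

state = {"geneA": True, "geneB": False, "geneC": False}
for hour in range(6):                                 # compute the state hour by hour
    print(hour, {gene: int(on) for gene, on in state.items()})
    state = step(state)

# A virtual knockout experiment would pin one gene off and rerun the loop
# to see how the computed expression pattern changes.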

Some details about the network may still be uncovered, the researchers say, but the fact that the model mirrors a real embryo so well shows that biologists have indeed identified almost all of the genes that are necessary to control these particular developmental processes. The model is accurate enough that the researchers can tweak specific parts -- for example, suppress a particular gene -- and get computed results that match those of previous experiments.

Allowing biologists to do these kinds of virtual experiments is precisely how computer models can be powerful tools, Peter says. Gene regulatory networks are so complex that it is almost impossible for a person to fully understand the role of each gene without the help of a computational model, which can reveal how the networks function in unprecedented detail.

Studying gene regulatory networks with models may also offer new insights into the evolutionary origins of species. By comparing the gene regulatory networks of different species, biologists can probe how they branched off from common ancestors at the genetic level.

So far, the researchers have only modeled one gene regulatory network, but their goal is to model the networks responsible for every part of a sea-urchin embryo, to build a model that covers not just the first 30 hours of a sea urchin's life but its entire embryonic development. Now that this modeling approach has been proven effective, Davidson says, creating a complete model is just a matter of time, effort, and resources.

The title of the PNAS paper is "Predictive computation of genomic logic processing functions in embryonic development." In addition to Peter and Davidson, the other author on the PNAS paper is Emmanuel Faure, a former Caltech postdoctoral scholar who is now at the École Polytechnique in France. This work was supported by the National Institute of Child Health and Human Development.


Story Source:

The above story is reprinted from materials provided by California Institute of Technology. The original article was written by Marcus Woo.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

I. S. Peter, E. Faure, E. H. Davidson. Predictive computation of genomic logic processing functions in embryonic development. Proceedings of the National Academy of Sciences, 2012; DOI: 10.1073/pnas.1207852109

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.


Monday, September 17, 2012

Computer scientists show what makes movie lines memorable

ScienceDaily (May 8, 2012) — Whether it's a line from a movie, an advertising slogan or a politician's catchphrase, some statements take hold in people's minds better than others. But why?

Cornell researchers who applied computer analysis to a database of movie scripts think they may have found the secret of what makes a line memorable.

The study suggests that memorable lines use familiar sentence structure but incorporate distinctive words or phrases, and they make general statements that could apply elsewhere. The latter may explain why lines such as, "You're gonna need a bigger boat" or "These aren't the droids you're looking for" (accompanied by a hand gesture) have become standing jokes. You can use them in a different context and apply the line to your own situation.

While the analysis was based on movie quotes, it could have applications in marketing, politics, entertainment and social media, the researchers said.

"Using movie scripts allowed us to study just the language, without other factors. We needed a way of asking a question just about the language, and the movies make a very nice dataset," said graduate student Cristian Danescu-Niculescu-Mizil, first author of a paper to be presented at the 50th Annual Meeting of the Association for Computational Linguistics July 8-14 in Jeju, South Korea.

The study grows out of ongoing work on how ideas travel across networks.

"We've been looking at things like who talks to whom," said Jon Kleinberg, a professor of computer science who worked on the study, "but we hadn't explored how the language in which an idea was presented might have an effect."

To address that, they collaborated with Lillian Lee, a professor of computer science who specializes in computer processing of natural human language.

They obtained scripts from about 1,000 movies, and a database of memorable quotes from those movies from the Internet Movie Database. Each quote was paired with another from the movie's script, spoken by the same character in the same scene and about the same length, to eliminate every factor except the language itself. Obi-Wan Kenobi, for example, also said, "You don't need to see his identification," but you don't hear that a lot.

They asked a group of people who had not seen the movies to choose which quote in the pairs was most memorable. Two patterns emerged to identify the memorable choice: distinctiveness and generality.

Then the researchers programmed a computer with linguistic rules reflecting these concepts. A line will be less general if it contains third-person pronouns and definite articles (which refer to people, objects or events in the scene) and uses past tense (usually referring to something that happened previously in the story). Distinctive language can be identified by comparison with a database of news stories. The computer was able to choose the memorable quote an average of 64 percent of the time.
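
A crude sketch of the "generality" cues described above is shown below: it scores a line by counting third-person pronouns, definite articles and a rough past-tense marker. The word lists, the scoring and the second example line are invented for illustration and are not the classifier the Cornell team built.

# Toy "scene-boundness" score based on the generality cues described above.
import re

THIRD_PERSON = {"he", "she", "him", "her", "his", "hers", "they", "them", "their"}
DEFINITE = {"the", "this", "that", "these", "those"}

def scene_bound_score(line):
    """Higher scores suggest the line is more tied to its original scene."""
    words = re.findall(r"[a-z']+", line.lower())
    score = sum(w in THIRD_PERSON for w in words)       # third-person pronouns
    score += sum(w in DEFINITE for w in words)          # definite articles and kin
    score += sum(w.endswith("ed") for w in words)       # crude past-tense cue
    return score

memorable = "You're gonna need a bigger boat"
scene_bound = "He dropped the harpoon near the stern"   # invented counterpart line
print(scene_bound_score(memorable), scene_bound_score(scene_bound))   # 0 and 4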

Later analysis also found subtle differences in sound and word choice: Memorable quotes use more sounds made in the front of the mouth, words with more syllables and fewer coordinating conjunctions.

In a further test, the researchers found that the same rules applied to popular advertising slogans.

Although teaching a computer how to write memorable dialogue is probably a long way off, applications might be developed to monitor the work of human writers and evaluate it in progress, Kleinberg suggested.

The researchers have set up a website where you can test your skill at identifying memorable movie quotes, and perhaps contribute some data to the research, at www.cs.cornell.edu/~cristian/memorability.html.


Story Source:

The above story is reprinted from materials provided by Cornell University. The original article was written by Bill Steele.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.


Saturday, September 15, 2012

Mini-camera with maxi-brainpower

ScienceDaily (Aug. 23, 2012) — Torrential rapids, plunging mud holes and soaring hurdles: in the outdoor competitions at the Olympic Games, athletes pushed themselves to the limit. But it's hard to depict this in pictures alone. This is why researchers at the Fraunhofer Institute for Integrated Circuits IIS created an intelligent camera that instantly delivers additional metadata, such as acceleration, temperature or heart rate. The new INCA can be seen at the IBC trade show in Amsterdam from September 7 -- 11.

Just a few more meters to the finish line. The mountain biker jumps over the last hill and takes the final curve, with the rest of the competition close at his heels. At such moments, you do not want to just watch; you want to put yourself in the athlete's shoes. How does he push the pace on the final stretch? How fast is his pulse racing? What does he feel like? Viewers will soon be able to obtain this information in real time, directly with the images, because the INCA intelligent camera, engineered by Fraunhofer researchers in Erlangen, makes completely new fields of application and perspectives possible.

INCA not only renders images in HD broadcast quality, it is also equipped with a variety of sensors that provide data on GPS position, acceleration, temperature and air pressure. In addition, the camera can be seamlessly connected to external systems via Bluetooth or WLAN: for instance, a chest harness to track heart rate, or face recognition software that opens up completely new perspectives. This way, viewers may be able to catch even a small glimpse into the emotional life of the athletes. The camera can also be combined with object recognition and voice detection systems.
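
Purely as an illustration of how such per-frame metadata could be bundled with video, the sketch below defines hypothetical record types for a frame and its sensor sample; none of these names come from the INCA software.

# Hypothetical per-frame metadata records (illustrative only).
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class SensorSample:
    gps: tuple                              # (latitude, longitude)
    acceleration: tuple                     # (x, y, z) in m/s^2
    temperature_c: float
    heart_rate_bpm: Optional[int] = None    # present only if a chest strap is paired

@dataclass
class TaggedFrame:
    frame_index: int
    timestamp: float
    sensors: SensorSample

def tag_frame(index: int, sensors: SensorSample) -> TaggedFrame:
    """Attach a timestamp and the latest sensor sample to a frame index."""
    return TaggedFrame(index, time.time(), sensors)

frame = tag_frame(0, SensorSample((47.86, 11.02), (0.1, 9.8, 0.2), 21.5, 162))
print(frame.sensors.heart_rate_bpm)   # 162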

Armed for any eventuality

Despite its tiny size (2 x 2 x 8 cm), the miniature camera is powerful enough to handle professional film and TV productions, thanks to its high performance and minimal power consumption. It is best suited to extreme situations, because INCA resists sand and dust, withstands cold and debris, and can be readily installed as a helmet camera. In addition to sports and event broadcasts, other potential areas of application include wildlife and nature films, as well as expeditions and adventures, where such additional data can provide invaluable information. The camera analyzes data and, by doing so, enables the user to experience and record more about his or her environment while filming.

Since the camera system is based on the Android operating system, it can be easily and flexibly adapted to the requirements of the respective subject matter simply by loading an app. INCA also possesses enough computing power to execute complex algorithms; as a result, it can correct lens (objective) errors and compress HD video in real time.

During its development, these issues posed major challenges to the scientists, as group manager Wolfgang Thieme of Fraunhofer IIS explains: "The core issue was figuring out how to house such a massive range of functionality within the tightest space. The OMAP processor (Open Multimedia Applications Platform) makes all of this possible. As the heart of the camera, this is comparable to a CPU that you find in any ordinary PC. The difference is that additional function blocks for various tasks have been integrated into the OMAP. Without these blocks, the system would neither record HD video images nor process and issue them in real time. The most difficult task was programming these blocks and using them for data processing."

This smart camera is not yet on the market.


Story Source:

The above story is reprinted from materials provided by Fraunhofer-Gesellschaft.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

