Thursday, January 31, 2013

Video analysis: Detecting text every which way

Jan. 3, 2013 — Software that detects and extracts text from within video frames, making it searchable, is set to make a vast resource even more valuable.

As video recording technology improves in performance and falls in price, ever-more events are being captured within video files. If all of this footage could be searched effectively, it would represent an invaluable information repository. One option to help catalogue large video databases is to extract text, such as street signs or building names, from the background of each recording. Now, a method that automates this process has been developed by a research team at the National University of Singapore, which also included Shijian Lu at the A*STAR Institute for Infocomm Research.

Previous research into automated text detection within images has focused mostly on document analysis. Recognizing background text within the complex scenes typically captured by video is a much greater challenge: it can come in any shape or size, be partly occluded by other objects, or be oriented in any direction.

The multi-step method for automating text recognition developed by Lu and co-workers overcomes these challenges, particularly the difficulties associated with multi-oriented text. Their method first processes video frames using 'masks' that enhance the contrast between text and background. The researchers developed a process to combine the output of two known masks to enhance text pixels without generating image noise. From the contrast-enhanced image, their method then searches for characters of text using an algorithm called a Bayesian classifier, which employs probabilistic models to detect the edges of each text character.

Even after identifying all characters in an image, a key challenge remains, explains Lu. The software must detect how each character relates to its neighbors to form lines of text -- which might run in any orientation within the captured scene. Lu and his co-workers overcame this problem using a so-called 'boundary growing' approach. The software starts with one character and then scans its surroundings for nearby characters, growing the text box until the end of the line of text is found. Finally, the software eliminates false-positive results by checking that identified 'text boxes' conform to certain geometric rules.
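
The boundary-growing idea can be pictured with a short sketch. The Python below is only an illustration of the general approach, not the authors' implementation: the character boxes, the distance threshold and the greedy grouping are assumptions made for clarity.

```python
def grow_text_lines(char_boxes, max_gap=15):
    """Greedily group detected character boxes into text lines.

    char_boxes: list of (x, y, w, h) rectangles for individual characters.
    max_gap:    largest centre-to-centre distance (in pixels) at which two
                characters are still treated as neighbours (assumed value).
    """
    def centre(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    def close(a, b):
        (ax, ay), (bx, by) = centre(a), centre(b)
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_gap

    unused = list(char_boxes)
    lines = []
    while unused:
        seed = unused.pop(0)
        line = [seed]
        grew = True
        while grew:                      # keep growing until no nearby character remains
            grew = False
            for box in list(unused):
                if any(close(box, member) for member in line):
                    line.append(box)
                    unused.remove(box)
                    grew = True
        lines.append(line)
    return lines
```

A final geometric filter, as described above, would then discard grouped "lines" whose bounding boxes have implausible shapes.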

Tests using sample video frames confirmed that the new method is the best yet at identifying video text, especially for text not oriented horizontally within the image, says Lu. However, there is still room for refinement, such as adapting the method to identify text not written in straight lines. "Document analysis methods achieve more than 90% character recognition," Lu adds. "The current state-of-the-art for video text is around 67-75%. There is a demand for improved accuracy."

Story Source:

The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).

Journal Reference:

Palaiahnakote Shivakumara, Rushi Padhuman Sreedhar, Trung Quy Phan, Shijian Lu, Chew Lim Tan. Multioriented Video Scene Text Detection Through Bayesian Classification and Boundary Growing. IEEE Transactions on Circuits and Systems for Video Technology, 2012; 22 (8): 1227 DOI: 10.1109/TCSVT.2012.2198129

Wednesday, January 30, 2013

Common data determinants of recurrent cancer are broken, mislead researchers

Jan. 2, 2013 — In order to study the effectiveness or cost effectiveness of treatments for recurrent cancer, you first have to discover the patients in medical databases who have recurrent cancer. Generally studies do this with billing or treatment codes -- certain codes should identify who does and does not have recurrent cancer. A recent study published in the journal Medical Care shows that the commonly used data determinants of recurrent cancer may be misidentifying patients and potentially leading researchers astray.

"For example, a study might look in a database for all patients who had chemotherapy and then another round of chemotherapy more than six months after the first, imagining that a second round defines recurrent disease. Or a study might look in a database for all patients with a newly discovered secondary tumor, imagining that all patients with a secondary tumor have recurrent disease. Our study shows that both methods are leave substantial room for improvement," says Debra Ritzwoller, PhD, health economist at the Kaiser Permanente Colorado Institute for Health Research and investigator at the University of Colorado Cancer Center.

The study used two unique datasets derived from HMO/Cancer Research Network and CanCORS/Medicare to check if the widely used algorithms in fact discovered the patients with recurrent disease that the algorithms were designed to detect. They did not. For example, a newly diagnosed secondary cancer may not mark a recurrence but may instead be a new cancer entirely; a second, later round of chemotherapy may be needed for continuing control of the de novo cancer, and not to treat recurrence.

"Basically, these algorithms don't work for all cancer sites in many datasets commonly used for cancer research," says Ritzwoller.

For example, to discover recurrent prostate cancer, no combination of billing codes used in this large data set pointed with sensitivity and specificity to patients who, according to notes in the data, had recurrent disease. The widely used algorithms performed best at identifying patients with recurrent lung, colorectal and breast cancer, but even there success rates were only between 75 and 85 percent.
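
For readers unfamiliar with the two metrics: sensitivity is the share of truly recurrent patients an algorithm flags, and specificity is the share of non-recurrent patients it correctly leaves unflagged. A minimal sketch, assuming the chart-review "truth" is available for comparison:

```python
def sensitivity_specificity(flags, truth):
    """flags, truth: parallel lists of booleans
    (algorithm says recurrent, chart review says recurrent)."""
    tp = sum(f and t for f, t in zip(flags, truth))          # true positives
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))
    fp = sum(f and (not t) for f, t in zip(flags, truth))
    fn = sum((not f) and t for f, t in zip(flags, truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```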

"We need to know who in these data sets has recurrent disease. Then we can do things like look at which treatments lead to which outcomes," Ritzwoller says. Matching patients to outcomes can help to decide who gets what treatment, and can help optimize costs in health care systems.

In a forthcoming paper, Ritzwoller and colleagues will suggest algorithms to replace those that have now proved inadequate.

Story Source:

The above story is reprinted from materials provided by University of Colorado Denver. The original article was written by Garth Sundem.

Journal Reference:

Michael J. Hassett, Debra P. Ritzwoller, Nathan Taback, Nikki Carroll, Angel M. Cronin, Gladys V. Ting, Deb Schrag, Joan L. Warren, Mark C. Hornbrook, Jane C. Weeks. Validating Billing/Encounter Codes as Indicators of Lung, Colorectal, Breast, and Prostate Cancer Recurrence Using 2 Large Contemporary Cohorts. Medical Care, 2012; : 1 DOI: 10.1097/MLR.0b013e318277eb6f

Surgeons may use hand gestures to manipulate MRI images in OR

Jan. 10, 2013 — Doctors may soon be using a system in the operating room that recognizes hand gestures as commands to tell a computer to browse and display medical images of the patient during a surgery.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the procedure and increase the risk of spreading infection-causing bacteria, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.

"One of the most ubiquitous pieces of equipment in U.S. surgical units is the computer workstation, which allows access to medical images before and during surgery," he said. "However, computers and their peripherals are difficult to sterilize, and keyboards and mice have been found to be a source of contamination. Also, when nurses or assistants operate the keyboard for the surgeon, the process of conveying information accurately has proven cumbersome and inefficient since spoken dialogue can be time-consuming and leads to frustration and delays in the surgery."

Researchers are creating a system that uses depth-sensing cameras and specialized algorithms to recognize hand gestures as commands to manipulate MRI images on a large display. Recent research to develop the algorithms has been led by doctoral student Mithun George Jacob.

Findings from the research were detailed in a paper published in December in the Journal of the American Medical Informatics Association. The paper was written by Jacob, Wachs and Rebecca A. Packer, an associate professor of neurology and neurosurgery in Purdue's College of Veterinary Medicine.

The researchers validated the system, working with veterinary surgeons to collect a set of gestures natural for clinicians and surgeons. The surgeons were asked to specify functions they perform with MRI images in typical surgeries and to suggest gestures for commands. Ten gestures were chosen: rotate clockwise and counterclockwise; browse left and right; up and down; increase and decrease brightness; and zoom in and out.

Critical to the system's accuracy is the use of "contextual information" in the operating room -- cameras observe the surgeon's torso and head -- to determine and continuously monitor what the surgeon wants to do.

"A major challenge is to endow computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures," Wachs said. "Surgeons will make many gestures during the course of a surgery to communicate with other doctors and nurses. The main challenge is to create algorithms capable of understanding the difference between these gestures and those specifically intended as commands to browse the image-viewing system. We can determine context by looking at the position of the torso and the orientation of the surgeon's gaze. Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images."

The hand-gesture recognition system uses a camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera, found in consumer electronics games that can track a person's hands, maps the surgeon's body in 3-D. Findings showed that integrating context allows the algorithms to accurately distinguish image-browsing commands from unrelated gestures, reducing false positives from 20.8 percent to 2.3 percent.

"If you are getting false alarms 20 percent of the time, that's a big drawback," Wachs said. "So we've been able to greatly improve accuracy in distinguishing commands from other gestures."

The system also has been shown to have a mean accuracy of about 93 percent in translating gestures into specific commands, such as rotating and browsing images.

The algorithm takes into account what phase the surgery is in, which aids in determining the proper context for interpreting the gestures and reducing the browsing time.

"By observing the progress of the surgery we can tell what is the most likely image the surgeon will want to see next," Wachs said.

The researchers also are exploring context using a mock brain biopsy needle that can be tracked in the brain.

"The needle's location provides context, allowing the system to anticipate which images the surgeon will need to see next and reducing the number of gestures needed," Wachs said. "So instead of taking five minutes to browse, the surgeon gets there faster."

Sensors in the surgical needle reveal the position of its tip.

The research was supported by the Agency for Healthcare Research and Quality, grant number R03HS019837.

Story Source:

The above story is reprinted from materials provided by Purdue University. The original article was written by Emil Venere.

Journal Reference:

M. G. Jacob, J. P. Wachs, R. A. Packer. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images. Journal of the American Medical Informatics Association, 2012; DOI: 10.1136/amiajnl-2012-001212

Tuesday, January 29, 2013

Solving puzzles without a picture: New algorithm assembles chromosomes from next generation sequencing data

Jan. 10, 2013 — One of the most difficult problems in the field of genomics is assembling relatively short "reads" of DNA into complete chromosomes. In a new paper published in Proceedings of the National Academy of Sciences, an interdisciplinary group of genome and computer scientists has solved this problem, creating an algorithm that can rapidly create "virtual chromosomes" with no prior information about how the genome is organized.

The powerful DNA sequencing methods developed about 15 years ago, known as next generation sequencing (NGS) technologies, create thousands of short fragments. In species whose genetics has already been extensively studied, existing information can be used to organize and order the NGS fragments, rather like using a sketch of the complete picture as a guide to a jigsaw puzzle. But as genome scientists push into less-studied species, it becomes more difficult to finish the puzzle.

To solve this problem, a team led by Harris Lewin, distinguished professor of evolution and ecology and vice chancellor for research at the University of California, Davis, and Jian Ma, assistant professor at the University of Illinois at Urbana-Champaign, created a computer algorithm that uses the known chromosome organization of one or more known species and NGS information from a newly sequenced genome to create virtual chromosomes.
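
As rough intuition for the reference-assisted approach, the sketch below orders scaffolds from a new assembly into "virtual chromosomes" by sorting them on their positions in a reference genome. The real algorithm weighs multiple reference species and the NGS read evidence itself; the input format here is invented for illustration.

```python
def virtual_chromosomes(scaffold_hits):
    """Order NGS scaffolds into virtual chromosomes using a reference genome.

    scaffold_hits: dict mapping scaffold name -> (ref_chromosome, ref_position,
    orientation) for that scaffold's best alignment to the reference
    (illustrative input only).
    Returns a dict: reference chromosome -> ordered list of (scaffold, orientation).
    """
    chromosomes = {}
    for scaffold, (chrom, pos, orient) in scaffold_hits.items():
        chromosomes.setdefault(chrom, []).append((pos, scaffold, orient))
    return {
        chrom: [(name, orient) for _, name, orient in sorted(hits)]
        for chrom, hits in chromosomes.items()
    }
```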

"We show for the first time that chromosomes can be assembled from NGS data without the aid of a preexisting genetic or physical map of the genome," Lewin said.

The new algorithm will be very useful for large-scale sequencing projects such as G10K, an effort to sequence 10,000 vertebrate genomes of which very few have a map, Lewin said.

"As we have shown previously, there is much to learn about phenotypic evolution from understanding how chromosomes are organized in one species relative to other species," he said.

The algorithm is called RACA (for reference-assisted chromosome assembly) and was co-developed by Jaebum Kim, now at Konkuk University, South Korea, and Denis Larkin of Aberystwyth University, Wales. Kim wrote the software tool, which was evaluated using simulated data, standardized reference genome datasets, and a primary NGS assembly of the newly sequenced Tibetan antelope genome generated by BGI (Shenzhen, China) in collaboration with Professor Ri-Li Ge at Qinghai University, China. Larkin led the experimental validation, in collaboration with scientists at BGI, proving that the predictions of chromosome organization were highly accurate.

Ma said that the new RACA algorithm will perform even better as developing NGS technologies produce longer reads of DNA sequence.

"Even with what is expected from the newest generation of sequencers, complete chromosome assemblies will always be a difficult technical issue, especially for complex genomes. RACA predictions address this problem and can be incorporated into current NGS assembly pipelines," Ma said.

Additional coauthors on the paper are Qingle Cai, Asan, Yongfen Zhang, and Guojie Zhang, BGI-Shenzhen, China; Loretta Auvil and Boris Capitanu, University of Illinois Urbana-Champaign.

The work was supported by grants from the National Science Foundation, National Institutes of Health, U.S. Department of Agriculture, National Research Foundation of Korea, Polish Grid Infrastructure Project, National Basic Research Program of China, Program of International S&T Cooperation of China and the National Natural Science Foundation of China.

Story Source:

The above story is reprinted from materials provided by University of California - Davis. The original article was written by Andy Fell.

Journal Reference:

Jaebum Kim, Denis M. Larkin, Qingle Cai, Asan, Yongfen Zhang, Ri-Li Ge, Loretta Auvil, Boris Capitanu, Guojie Zhang, Harris A. Lewin, and Jian Ma. Reference-assisted chromosome assembly. PNAS, January 10, 2013 DOI: 10.1073/pnas.1220349110

Software package for all types of imaging

Jan. 23, 2013 — Signal reconstruction algorithms can now be developed more elegantly: scientists at the Max Planck Institute for Astrophysics have released NIFTY, a new software package for data analysis and imaging that can handle mapping in any number of dimensions or on spherical projections without encoding the dimensional information in the algorithm itself. The advantage is that once a particular method for image reconstruction has been programmed with NIFTY, it can easily be applied to many other applications. Although it was originally developed with astrophysical imaging in mind, NIFTY can also be used in other areas such as medical imaging.

Behind most of the impressive telescopic images that capture events at the depths of the cosmos is a lot of work and computing power. The raw data from many instruments are not vivid enough even for experts to have a chance at understanding what they mean without the use of highly complex imaging algorithms. A simple radio telescope scans the sky and provides long series of numbers. Networks of radio telescopes act as interferometers and measure the spatial vibration modes of the brightness of the sky rather than an image directly. Space-based gamma ray telescopes identify sources by the pattern that is generated by the shadow mask in front of the detectors. In all of these examples, sophisticated algorithms are necessary to generate images from the raw data. The same applies to medical imaging devices, such as computer tomographs and magnetic resonance scanners.

Previously each of these imaging problems needed a special computer program that is adapted to the specifications and geometry of the survey area to be represented. But many of the underlying concepts behind the software are generic and ideally would just be programmed once if only the computer could automatically take care of the geometric details.

With this in mind, the researchers in Garching have developed and now released the software package NIFTY that makes this possible. An algorithm written using NIFTY to solve a problem in one dimension can just as easily be applied, after a minor adjustment, in two or more dimensions or on spherical surfaces. NIFTY handles each situation while correctly accounting for all geometrical quantities. This allows imaging software to be developed much more efficiently because testing can be done quickly in one dimension before application to higher dimensional spaces, and code written for one application can easily be recycled for use in another.
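
NIFTY's own classes are not shown here; the snippet below only illustrates the underlying idea with plain NumPy and SciPy: an operation written once (Gaussian smoothing, in this toy case) runs unchanged on a 1-D signal, a 2-D image or a 3-D volume because the geometry is handled generically.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(field, sigma=2.0):
    """Gaussian smoothing that works for any dimensionality of `field`,
    because scipy.ndimage treats the array's geometry generically."""
    return gaussian_filter(field, sigma=sigma)

signal_1d = np.random.randn(256)          # a 1-D data stream
image_2d = np.random.randn(128, 128)      # a 2-D map
cube_3d = np.random.randn(32, 32, 32)     # a 3-D volume

for data in (signal_1d, image_2d, cube_3d):
    print(smooth(data).shape)             # same code path for every geometry
```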

NIFTY stands for "Numerical Information Field Theory." The relatively young field of Information Field Theory aims to provide recipes for optimal mapping, completely exploiting the information and knowledge contained in data. NIFTY now simplifies the programming of such formulas for imaging and data analysis, regardless of whether they come from the information field theory or from somewhere else, by providing a natural language for translating mathematics into software.

The NIFTY software release is accompanied by a publication in which the mathematical principles are illustrated using examples. In addition, the researchers provide extensive online documentation. The versatility of NIFTY has already been demonstrated in an earlier scientific publication on nonlinear signal reconstruction and will certainly be helpful in developing better and more accurate imaging methods in astronomy, medical technology and earth observation.

Story Source:

The above story is reprinted from materials provided by Max-Planck-Institut für Astrophysik (MPA).

Unique software supports behavioural intervention programs

Jan. 21, 2013 — The internet offers users a cost-effective way of accessing information and advice on any health problem, 24 hours a day. A group of social scientists has taken advantage of this by developing software which enables other researchers to easily create interactive internet-based intervention programmes to support behavioural change. The software, known as LifeGuide, is being used in intervention programmes, for example to quit smoking or manage weight loss.

LifeGuide is a flexible tool that can be used to give tailored health advice, help users make decisions about life choices, and support them in their efforts to maintain long-term change. It has been developed by scientists at the University of Southampton with funding from the Economic and Social Research Council (ESRC). As a measure of its popularity, in the last two years over 1,000 researchers worldwide have registered to use LifeGuide.

"Interventions designed to influence behaviour are a part of many people's daily life, such as personal advice, support and training from professionals, or general information provided by the media. However, advice and support can be costly and may not always be readily available to everyone," says Professor Lucy Yardley who developed LifeGuide with colleagues. "But, the internet can give access to services offering information and advice on many health problems. Services can also be made interactive and individually tailored, and they can be set up to support people with reminders, feedback, action planning and chat rooms."

Despite the advantages of working online, until LifeGuide was introduced, researchers had to programme each internet-based behavioural intervention from scratch. Consequently, development costs were high and systems were not easily modified once programmed.

"LifeGuide is a unique tool that enables researchers with no programming background to create interactive internet-based systems to support behaviour change," Professor Yardley continues. "Researchers don't need to employ special programmers and it can be readily modified to suit many different contexts."

It allows researchers to create and modify two important dimensions of behavioural interventions: providing tailored information and advice; and supporting sustained behaviour. The system also supports evaluation of interventions, such as online questionnaire assessment, and automatic follow-up.
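
The kind of tailoring an author specifies can be imagined as simple rules mapping questionnaire answers to advice. The snippet below is a generic illustration, not LifeGuide's actual authoring interface; the questions and messages are invented.

```python
def tailored_advice(answers):
    """Pick advice messages from questionnaire answers (illustrative only).

    answers: dict such as {"smokes": True, "cigarettes_per_day": 12,
                           "wants_to_quit": True}
    """
    messages = []
    if answers.get("smokes"):
        if answers.get("wants_to_quit"):
            messages.append("Set a quit date within the next two weeks.")
        else:
            messages.append("Here is what cutting down could do for your health.")
        if answers.get("cigarettes_per_day", 0) >= 10:
            messages.append("Consider nicotine replacement to manage cravings.")
    else:
        messages.append("Great: staying smoke-free protects your long-term health.")
    return messages
```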

For example, LifeGuide has been applied in research on how doctors prescribe antibiotics. Recently, a public health warning about over-prescription of antibiotics for minor infections was given by England's Chief Medical Officer. It was pointed out that antibiotics are increasingly losing their effectiveness as bacteria adapt and develop resistance.

While the public may feel they need antibiotics to ease infections, doctors have a responsibility to prescribe them only to people with a clear medical need. To ensure more sensible use of antibiotics for respiratory infections, Paul Little, Professor of Primary Care Research in the Faculty of Medicine at the University of Southampton and his colleagues have used LifeGuide to develop and tailor online communication training packages for health professionals in six EU countries. Professor Little says, "Using this software in the project made it easy and inexpensive to adapt our training materials for the different countries we are collaborating with."

Story Source:

The above story is reprinted from materials provided by Economic and Social Research Council (ESRC).

Sunday, January 27, 2013

Robot allows 'remote presence' in programming brain and spine stimulators

Jan. 16, 2013 — With the rapidly expanding use of brain and spinal cord stimulation therapy (neuromodulation), new "remote presence" technologies may help to meet the demand for experts to perform stimulator programming, reports a study in the January issue of Neurosurgery.

The preliminary study by Dr. Ivar Mendez of Queen Elizabeth II Health Sciences Centre in Halifax, Nova Scotia, Canada, supports the feasibility and safety of using a remote presence robot -- called the "RP-7" -- to increase access to specialists qualified to program the brain and spine stimulators used in neuromodulation.

Robot Lets Experts Guide Nurses in Programming Stimulators

Dr. Mendez and his group developed the RP-7 as a way of allowing experts to "telementor" nonexpert nurses in programming stimulator devices. Already widely used for Parkinson's disease and severe chronic pain, neuromodulation is being explored for use in other conditions, such as epilepsy, severe depression, and obsessive-compulsive disorder.

In this form of therapy, a small electrode is surgically placed in a precise location in the brain or spine. A mild electrical current is delivered to stimulate that area, with the goal of interrupting abnormal activity. As more patients undergo brain and spine stimulation therapy, there's a growing demand for experts to program the stimulators that generate the electrical current.

The RP-7 is a mobile, battery-powered robot that can be controlled using a laptop computer. It is equipped with digital cameras and microphones, allowing the expert, nurse, and patient to communicate. The robot's "head" consists of a flat-screen monitor that displays the face of the expert operator.

The RP-7 also has an "arm" equipped with a touch-screen programmer, which the nurse can use to program the stimulator. The expert can "telestrate" to indicate to the nurse the correct buttons to push on the programming device.

Access to Specialists in the Next Room -- or Miles Away

In the preliminary study, patients with neuromodulation devices were randomly assigned to conventional programming, with the expert in the room; or remote programming, with the expert using the RP-7 to guide a nurse in programming the stimulator. For the study, the expert operators were simply in another room of the same building. However, since the RP-7 operates over a conventional wireless connection, the expert can be anyplace that has Internet access.

On analysis of 20 patients (10 in each group), there was no significant difference in the accuracy or clinical outcomes of remote-presence versus conventional programming. No adverse events occurred with either type of session.

The remote-presence sessions took a little more time: 33 versus 26 minutes, on average. Patients, experts, and nonexpert nurses all gave high satisfaction scores for the programming experience.

"This study demonstrated that remote presence can be used for point-of-care programming of neuromodulation devices," Dr. Mendez and coauthors write. The study provides "proof of principle" that the RP-7 or similar devices can help to meet the need for experts needed to serve the rapidly expanding number of patients with neuromodulation therapies.

The researchers have also started a pilot study using a new mobile device, called the RP-Xpress. About the size of a small suitcase, the RP-Xpress is being used to perform long-distance home visits for patients living hundreds of miles away, using existing local cell phone networks. Dr. Mendez and colleagues conclude, "We envision a time, in the near future, when patients with implanted neuromodulation devices will have real-time access to an expert clinician from the comfort of their own home."

Story Source:

The above story is reprinted from materials provided by Wolters Kluwer Health: Lippincott Williams & Wilkins, via Newswise.

Journal Reference:

Ivar Mendez, Michael Song, Paula Chiasson, Luis Bustamante. Point-of-Care Programming for Neuromodulation. Neurosurgery, 2013; 72 (1): 99 DOI: 10.1227/NEU.0b013e318276b5b2

Grammar undercuts security of long computer passwords

Jan. 24, 2013 — When writing or speaking, good grammar helps people make themselves understood. But when used to concoct a long computer password, grammar -- good or bad -- provides crucial hints that can help someone crack that password, researchers at Carnegie Mellon University have demonstrated.

A team led by Ashwini Rao, a software engineering Ph.D. student in the Institute for Software Research, developed a password-cracking algorithm that took into account grammar and tested it against 1,434 passwords containing 16 or more characters. The grammar-aware cracker surpassed other state-of-the-art password crackers when passwords had grammatical structures, with 10 percent of the dataset cracked exclusively by the team's algorithm.

"We should not blindly rely on the number of words or characters in a password as a measure of its security," Rao concluded. She will present the findings on Feb. 20 at the Association for Computing Machinery's Conference on Data and Application Security and Privacy (CODASPY 2013) in San Antonio, Texas.

Basing a password on a phrase or short sentence makes it easier for a user to remember, but the grammatical structure dramatically narrows the possible combinations and sequences of words, she noted.

Likewise, grammar, whether good or bad, necessitates using different parts of speech -- nouns, verbs, adjectives, pronouns -- that also can undermine security. That's because pronouns are far fewer in number than verbs, verbs fewer than adjectives and adjectives fewer than nouns. So a password composed of "pronoun-verb-adjective-noun," such as "Shehave3cats" is inherently easier to decode than "Andyhave3cats," which follows "noun-verb-adjective-noun." A password that incorporated more nouns would be even more secure.
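
The effect of word class on search space is easy to quantify. With made-up dictionary sizes (the real counts differ, but the ordering is what matters), the sketch below compares how many phrases a cracker must try for the two templates mentioned above.

```python
# Illustrative dictionary sizes -- the real counts differ, but the ordering
# (pronouns << verbs << adjectives << nouns) is the point.
sizes = {"pronoun": 50, "verb": 1_000, "adjective": 5_000, "noun": 20_000}

def search_space(template):
    """Number of candidate phrases for a part-of-speech template."""
    total = 1
    for part in template:
        total *= sizes[part]
    return total

weak = search_space(["pronoun", "verb", "adjective", "noun"])   # e.g. "Shehave3cats"
strong = search_space(["noun", "verb", "adjective", "noun"])    # e.g. "Andyhave3cats"
print(f"{weak:,} vs {strong:,} candidates ({strong // weak}x larger search space)")
```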

"I've seen password policies that say, 'Use five words,'" Rao said. "Well, if four of those words are pronouns, they don't add much security."

For instance, the team found that the five-word passphrase "Th3r3 can only b3 #1!" was easier to guess than the three-word passphrase "Hammered asinine requirements." Neither the number of words nor the number of characters determined password strength when grammar was involved. The researchers calculated that "My passw0rd is $uper str0ng!" is 100 times stronger as a passphrase than "Superman is $uper str0ng!," which in turn is 10,000 times stronger than "Th3r3 can only b3 #1!"

The research was an outgrowth of a class project for a master's-level course at CMU, Rao said. She and Gananand Kini, a fellow CMU graduate student, and Birendra Jha, a Ph.D. student at MIT, built their password cracker by compiling a dictionary for each part of speech and identifying a set of grammatical sequences, such as "determiner-adjective-noun" and "noun-verb-adjective-adverb," that might be used to generate passphrases.
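
A grammar-aware cracker can be sketched as a generator that walks part-of-speech templates and emits candidate passphrases. The word lists and templates below are tiny stand-ins; the team's actual tool is far more elaborate.

```python
from itertools import product

# Toy dictionaries and templates, assumed for illustration only.
dictionaries = {
    "determiner": ["the", "my"],
    "adjective": ["red", "super"],
    "noun": ["cat", "password"],
    "verb": ["is", "runs"],
}
templates = [
    ("determiner", "adjective", "noun"),
    ("noun", "verb", "adjective"),
]

def candidates():
    """Yield candidate passphrases, ordered by grammatical template."""
    for template in templates:
        for words in product(*(dictionaries[part] for part in template)):
            yield "".join(words)

for guess in candidates():
    print(guess)    # a real cracker would hash each guess and compare
```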

Rao said the grammar-aware password cracker was intended only as a proof of concept and no attempt has been made to optimize its performance. But it is only a matter of time before someone does, she predicted.

Story Source:

The above story is reprinted from materials provided by Carnegie Mellon University.

A boost to your mobile signal

Jan. 25, 2013 — When using your mobile phone, it doesn't take much to lose that precious signal -- just turning a corner or riding on a train can be enough. EU-funded research is developing new technologies to eradicate those annoying 'black holes' in wireless coverage, while freeing up some mobile network capacity at the same time.

We live in a 24/7, always-on, mobile and wireless world. Wherever we go we are connected -- to each other, to the web, to all our favourite apps, to whatever data we need, exactly when we need it.

Or so we like to think. The reality is quite different. There are corners of our homes where the web won't work. There are black spots in towns and huge holes in the wireless network in more remote areas. Coverage is far from complete.

To compound the problem, even when they have a good signal, smartphones often struggle to download the data they need because the mobile networks are saturated. The airwaves are at full capacity.

Europe has always been at the forefront of innovation in telecommunications and a pioneer of the next generation of mobile technologies. So watch out for 'femtocells' -- small mobile telephony cells that improve both connectivity and coverage at a local level.

Better signals

The principle is quite simple. Instead of mobile operators having to invest millions in powerful long-range base stations to extend coverage over a wide area, they can move mobile connections to more localised small cells. A residential femtocell, for example, would improve coverage for just one house, or perhaps a block of flats. A commercial small cell might boost mobile connectivity for a whole office while a mobile femtocell could provide passengers on public transport with a strong and static signal (sparing their battery and eradicating sudden drops in signal).

Femtocells are far more than mobile booster stations; they can also help to divert data traffic off the mobile airwaves. This offloading creates more network capacity. Wired into the fixed line broadband network, a femtocell can reroute data and voice traffic down wires, freeing up the precious airwaves for even more traffic.

Fast forward

Significant research is still required to turn these practical ideas into reality. There are so many dots to join up. How do you prevent femtocell signals from interfering with signals to and from main base stations? How do you decide whether to route connections through fixed lines? What protocols should you use in the layers of the 'communications stack'?

The FP7 'Broadband-evolved Femto networks' (Befemto) project unites several industry giants in mobile telecommunications equipment, mobile operators, small companies with key technologies and several technology R&D organisations to solve these issues and demonstrate prototype femtocells at work.

'Europe recognises that mobile connectivity is a powerful social and economic driver,' explains Dr Thierry Lestable, the project's coordinator. 'EU support for the development of cheap technologies to enhance and boost innovative services is really important for growth, not just growth for telecoms manufacturers and providers, but for the entire economy. Most businesses rely now on mobility and permanent connectivity.'

'By adding femtocells and small cells into the mobile network mix we make it possible for mobile operators to improve their spectrum efficiencies through heterogeneous networks (HetNets) and seamless integration of the fixed line telecoms network,' Dr Lestable continues. 'But this rerouting has to be optimised and intelligent. We have been developing and testing self-managed femtocell connections which are programmed to pick their wireless protocols and frequencies and route traffic depending on a whole host of contextual data.'

Befemto has developed advanced cooperation, self-organising, healing and switching algorithms. The built-in intelligence allows femtocells to optimise their use of radio frequencies (depending on traffic densities, for example) and fixed broadband networks. They can also communicate with macro-basestations without any interference or effect on macro signal quality or capacity.
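
One flavour of self-organising behaviour is frequency selection: a femtocell periodically measures interference on its candidate channels and moves to the quietest one. The sketch below is a generic illustration, not one of the Befemto algorithms, and the measurement interface is assumed.

```python
def pick_channel(interference_by_channel):
    """Choose the least-interfered channel.

    interference_by_channel: dict mapping channel id -> measured interference
    power (for example, averaged RSSI in dBm); how the measurements are
    obtained is outside this sketch.
    """
    return min(interference_by_channel, key=interference_by_channel.get)

# Example: channel 3 shows the lowest measured interference, so it is chosen.
print(pick_channel({1: -62.0, 2: -70.5, 3: -88.1}))
```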

'These new algorithms allow femtocell networks to work together to provide top quality coverage for users and support seamless, low-power and low-cost relief enhancement to the mobile service,' Dr Lestable remarks. 'We are focusing on the newly launched LTE or 4G networks because customers are paying a premium for these and will expect a true broadband experience: fast, reliable and unlimited access to everything everywhere. Femtocells, and small cells will allow operators to meet these expectations and lower their operational costs at the same time.'

Active all areas

The project partners have applied for an impressive 12 patents for the technologies developed within the project. These patents range from novel network monitoring software to mobile traffic optimisation algorithms. The project has also improved radio-frequency front-end technology to improve signal quality and reduce interference between femtocells and other wireless devices.

On the international stage Befemto has played an important role in proposing and supporting industry standards for femtocell protocols and the mechanisms for migrating data traffic between mobile, WiFi and fixed line architectures. The project partners have made a total of 27 direct contributions to 3GPP, the international standards organisation for mobile technology.

The partners have also run five international workshops worldwide and two training schools to share the project's findings and build a common understanding of these technologies within the community. The partners have published more than 70 international papers.

The Befemto technologies and system architectures have been tested in five pilot demonstrations. Trial results show that femtocells significantly reduce load on mobile networks while boosting signal strength and quality at a local level. The work of the project will help mobile operators reach two major technical targets: high spectral efficiency (8 bits/s/Hz per cell) -- meaning more and better use of scarce airwaves -- and a maximum mean transmit power of 10 mW -- for lower levels of interference.
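
To put the spectral-efficiency target in perspective, throughput is roughly spectral efficiency times channel bandwidth; the carrier width below is an assumed, typical LTE figure, not a project specification.

```python
spectral_efficiency = 8        # bits/s/Hz per cell (the project target)
bandwidth_hz = 20e6            # assumed 20 MHz LTE carrier
throughput_bps = spectral_efficiency * bandwidth_hz
print(f"{throughput_bps / 1e6:.0f} Mbit/s per cell")   # -> 160 Mbit/s
```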

'Most importantly, our trials also prove to mobile network operators that the small cell model is a good one,' says Dr Lestable. 'We've looked at several different business models for their deployment; no matter which one you follow, femto- and small cells will save mobile operators money and help them to create value -- a sure way to get them to market.'

It looks like that dream of 24/7 fast connectivity could be just around the corner -- a corner that no longer gets in the way of your calls.

Story Source:

The above story is reprinted from materials provided by European Commission, CORDIS.

Friday, January 11, 2013

Internet outages in the US doubled during Hurricane Sandy

Dec. 18, 2012 — USC scientists who track Internet outages throughout the world noted a spike in outages due to Hurricane Sandy, with almost twice as much of the Internet down in the U.S. as usual.

Previous research by this team has shown that on any given day, about 0.3 percent of the Internet is down for one reason or another. Just before Hurricane Sandy hit the East Coast, that number was around 0.2 percent in the U.S. (pretty good, by global standards) -- but once the storm made landfall, it jumped to 0.43 percent and took about four days to return to normal, according to a new report by scientists at the Information Sciences Institute (ISI) at the USC Viterbi School of Engineering.

"On a national scale, the amount of outage is small, showing how robust the Internet is. However, this significant increase in outages shows the large impact Sandy had on our national infrastructure," said John Heidemann, who led the team that tracked an analyzed the data. Heidemann is a research professor of computer science and project leader in the Computer Networks Division of ISI.

Heidemann worked with graduate student Lin Quan and research staff member Yuri Pradkin, both also from ISI, sending tiny packets of data known as "pings" to networks and waiting for "echoes," or responses. Though some networks -- those with a firewall -- will not respond to pings, this method has been shown to provide a statistically reasonable picture of when parts of the Internet are active or down.
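
The measurement idea can be sketched in a few lines: probe a fixed sample of addresses and report the fraction that have gone silent. This toy version shells out to the system ping command (Linux-style flags assumed) and is not the ISI team's actual measurement system.

```python
import subprocess

def responds(address, timeout_s=2):
    """Return True if `address` answers a single ICMP echo request.
    Uses the system `ping` command; flags assume a Linux iputils ping."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def outage_fraction(sample_addresses):
    """Fraction of the sampled addresses that do not respond right now."""
    down = sum(not responds(a) for a in sample_addresses)
    return down / len(sample_addresses)
```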

The team was also able to pinpoint where the outages were occurring, and noted a spike in outages in New Jersey and New York after Sandy made landfall.

Their research was published as a technical report on the ISI webpage on December 17, and the raw data will be made available to other scientists who would like to analyze it.

The data is not yet specific enough to say exactly how many individuals were affected by the outage, but does provide solid information about the scale and location of outages, which could inform Internet service providers on how best to allocate resources to respond to natural disasters.

"Our work measures the virtual world to peer into the physical," said Heidemann. "We are working to improve the coverage of our techniques to provide a nearly real-time view of outages across the entire Internet. We hope that our approach can help first responders quickly understand the scope of evolving natural disasters."

Story Source:

The above story is reprinted from materials provided by University of Southern California, via EurekAlert!, a service of AAAS.

Thursday, January 10, 2013

Study reveals impact of public DNS services; Researchers develop tool to help

Oct. 25, 2012 — A new study by Northwestern University researchers has revealed that public DNS services could actually slow down users' web-surfing experience. As a result, researchers have developed a solution to help avoid such an impact: a tool called namehelp that could speed web performance by 40 percent.

Through a large-scale study involving more than 10,000 hosts across nearly 100 countries, Fabián Bustamante, associate professor of electrical engineering and computer science at Northwestern's McCormick School of Engineering and Applied Science, and his team found that one cause of slow web performance is a growing trend toward public Domain Name Systems (DNS), a form of database that translates Internet domain and host names into Internet Protocol (IP) addresses.

DNS services play a vital role in the Internet: every time a user visits a website, chats with friends, or sends email, his computer performs DNS look-ups before setting up a connection. Complex web pages often require multiple DNS look-ups before they start loading, so users' computers may perform hundreds of DNS look-ups a day. Most users are unaware of DNS, since Internet Service Providers (ISP) typically offer the service transparently.

Over the last few years, companies such as Google, OpenDNS, and Norton DNS have begun offering "public" DNS services. While "private" DNS services, such as those offered by ISPs, may be misconfigured, respond slowly to queries, and go down more often, public DNS services offer increased security and privacy, and quicker resolution time. The arrangement is also beneficial for public DNS providers, who gain access to information about users' web habits.

Bustamante and his team found that while using public DNS services may provide many benefits, users' web performance can suffer due to the hidden interaction of DNS with Content Delivery Networks (CDNs), another useful and equally transparent service in the web.

CDNs help performance by offering exact replicas of website content in hundreds or thousands of computer servers around the world; when a user types in a web address, he is directed to the copy geographically closest to him. Most popular websites -- more than 70 percent of the top 1,000 most popular sites, according to the Northwestern study -- rely on CDNs to deliver their content quickly to users around the world.

But researchers found that using public DNS services can result in bad redirections, sending users to content from CDN replicas that are three times farther away than necessary.

Public DNS and CDN services are working to address the problem, but current users are left with two mediocre options -- bad web performance through public DNS services or bad security and privacy support through private DNS services.

Now Bustamante and his group have developed a tool called namehelp that may let users have their cake and eat it, too -- by using public DNS services without compromising on web performance.

namehelp runs personalized benchmarks in the background, from within users' computers, to determine their optimal DNS configuration and improve their web experience by helping sites load faster. If it finds that a user is receiving less than optimal web performance, namehelp automatically fixes it by cleverly interacting with DNS services and CDNs to ensure the user gets his content from the nearest possible copy.
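
A very small benchmark of the kind namehelp automates could look like the sketch below, which times the same look-up against several resolvers using the third-party dnspython package (version 2.0 or later assumed). The real tool goes further and also checks how close the CDN replica returned by each resolver actually is; the ISP resolver address below is a placeholder.

```python
import time
import dns.resolver   # third-party "dnspython" package (>= 2.0 assumed)

def resolution_time(hostname, nameserver):
    """Time a single DNS look-up of `hostname` against one resolver."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    start = time.perf_counter()
    resolver.resolve(hostname, "A")
    return time.perf_counter() - start

# "ISP" uses a placeholder documentation address; replace with your own resolver.
resolvers = {"ISP": "192.0.2.53", "Google": "8.8.8.8", "OpenDNS": "208.67.222.222"}
for name, ip in resolvers.items():
    print(name, f"{resolution_time('example.com', ip) * 1000:.1f} ms")
```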

You can download namehelp today from: http://aqualab.cs.northwestern.edu/projects/namehelp.

The paper describing the research is titled "Content Delivery and the Natural Evolution of DNS: Remote DNS Trends, Performance Issues and Alternative Solutions." The team's findings will be presented at the Internet Measurement Conference (IMC 2012) in Boston this November. In addition to Bustamante, authors on the paper are lead author John S. Otto, Mario A. Sanchez, and John P. Rula, all of Northwestern.

Story Source:

The above story is reprinted from materials provided by Northwestern University.

Wednesday, January 9, 2013

TV and the internet: a marriage made in entertainment heaven

Dec. 18, 2012 — If you have bought a new television lately, the chances are it is a lot smarter than your old one. Smart TVs, also known as connected or hybrid televisions, featuring integrated internet connectivity, currently account for around a third of TV sales in Europe. They are the end point in a huge and rapidly expanding value chain driven by the intensifying convergence of television and the internet.

Just as accessing the internet solely from a desktop PC is rapidly becoming a thing of the past, so too is broadcast TV in the traditional sense -- along with the complaint that 'there's nothing on television!' With connected TVs, channels become interactive, content can be shared, rated and commented among friends, videos can be streamed and watched at will, and a favourite programme will never be missed.

'Connected TV', in the words of Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, gives consumers, 'the potential to combine the best of what they get from existing media, with the best of what they can get from the new. To combine their favourite TV shows with their favourite games and social networks; material on demand, not on schedule, from the comfort of your sofa.'

And it is not just about the TV set in your living room. Increasingly both traditional broadcast TV and new multimedia content is accessible across a range of devices -- you can start watching a programme at home over coffee in the morning and seamlessly continue watching it on your smartphone on the commute to work.

For consumers it sounds like entertainment heaven, but making it happen is both a major opportunity and challenge for network operators, system developers and integrators, content providers and creators. Several EU-funded projects are addressing the challenges, from finding the best methods to deliver content to ensuring a seamless integration of all media for end users.

Bandwidth hunger: from HD to 3D

The Optiband (1) project, for example, is focusing on the delivery of high-definition (HD) and Video-on-Demand (VoD) via 'internet-protocol television' (IPTV) networks, which today typically use high-speed 'Digital subscriber line' (DSL) to deliver media content from the internet to end-users alongside more traditional voice and data services. By applying innovative algorithms to efficiently distribute content while preserving video quality, the Optiband researchers have been able to demonstrate the delivery of three HD video streams over a single 15 Mbps DSL connection, allowing, in effect, three users to share one connection to watch different HD content with no loss in quality -- a big improvement on the current state of the art.

Optimising delivery methods is perhaps the most crucial factor for the widespread rollout of connected TV services today. Video content is bandwidth hungry: it already accounts for more than half of all data traversing the internet. And as HD content becomes more widespread, network saturation becomes a very real -- and alarming -- possibility. By 2016, it would take one person six million years to watch all the video content that will cross networks worldwide in a single month, according to some estimates. That requires a lot of bandwidth, but perhaps not as much as feared.

'The golden rule to remember is that all bandwidth available will be consumed,' says Jari Ahola, a project coordinator at the VTT Technical Research Centre of Finland. 'Just as bandwidth increases, the ways to consume it are increasing too: high-definition video is one example.'

So adding more bandwidth -- essentially laying more cables and other network infrastructure -- is not the only way to address the problem. Changing the way video is distributed would also help.

Instead of using the traditional unicast model, based on servers sending data to each client, Mr Ahola and a team of researchers working in the P2P-Next (2) project have shown that content can be distributed much more efficiently over a peer-to-peer (P2P) network in which data hops from one user to the next. By deploying a modified version of the P2P technology used for illegal file sharing, the P2P-Next team demonstrated a system for delivering video that uses at least 65 % less bandwidth compared to the unicast streaming approach.
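
The bandwidth argument is easy to see with rough numbers: under unicast the source uploads one full-rate stream per viewer, while in a peer-to-peer swarm viewers redistribute pieces to each other and the source only covers the shortfall. The figures below are illustrative, not measurements from P2P-Next.

```python
viewers = 10_000
stream_rate_mbps = 3.0            # assumed bitrate of one video stream
peer_upload_mbps = 2.0            # assumed average upload each peer contributes

unicast_load = viewers * stream_rate_mbps
p2p_load = max(viewers * stream_rate_mbps - viewers * peer_upload_mbps,
               stream_rate_mbps)  # the source still sends at least one full stream

print(f"unicast source load: {unicast_load:,.0f} Mbit/s")
print(f"P2P source load:     {p2p_load:,.0f} Mbit/s "
      f"({100 * (1 - p2p_load / unicast_load):.0f}% less)")
```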

'For network operators, P2P offers a big advantage in terms of bandwidth demands and cost,' the P2P-Next coordinator says.

More efficient delivery methods are important not just to keep pace with current trends, such as the more widespread distribution of HD content, but also future ones that are likely to be even more bandwidth intensive. After HD, 3D is set to become the new viewing revolution, and researchers working in the Romeo (3) project are attempting to ensure it gets to users with sufficient quality. Their approach is to combine a quality-aware P2P system with Digital Video Broadcasting (DVB) technology and innovative real-time compression methods to deliver 3D video content and spatial audio -- including live streams -- to multiple users on both fixed-line and mobile networks.

Still, network operators worry that, even with content optimisation and more efficient P2P delivery methods, user demands will lead to uncontrollable increases in traffic over time. The issue is being dealt with in the Napa-Wine (4) initiative, in which researchers in France, Italy, Hungary, Poland and the UK are carrying out an in-depth analysis of the impact a large deployment of P2P-TV services would have on the internet. Based on their work, they plan to provide recommendations for P2P-TV developers for best-in-class design of systems that minimise network load; also demonstrating low-cost changes that network operators can make to better exploit the available bandwidth for P2P traffic.

Two-way TV

For service providers and network operators, understanding what is going on over the network is crucial to ensure quality of service. Equally, content providers and creators want to know how their content is being received by their audience.

Because linked TV can be interactive and data is able to travel both ways, there is a huge opportunity to mine viewer information, enabling more accurate market research for providers -- compared to relying on viewer feedback surveys -- and the possibility of much more personalised viewing experiences for end-users.

The recently launched Vista-TV (5) project is developing a system to extract, mine and analyse anonymised viewing data from connected TV users. The end result, the project team hopes, will be the creation of an entirely new SME-driven market in TV viewing-behaviour information.

'This is a revolutionary approach. Until now, the only measurements are taken by national organisations, and only a few thousand users at a time,' says Professor Abraham Bernstein, the project coordinator at the University of Zurich, Switzerland.

For end-users, however, the most revolutionary aspect of connected TV is the fact that it effectively puts them in control. You want more information on the subject of a documentary? A couple of clicks and it is on your screen, along with a list of other programmes you might be interested in watching via a video-on-demand service. You want to watch the football with your friends but don't feel like going out? Watch together, comment and interact via a social network. Just returned from holiday and want to share your photos and videos with family and friends? Upload them and create your own private channel from the comfort of your sofa.

A range of projects are working on the underlying technologies to make this integration of different media, delivery methods and viewing devices as seamless and transparent to the end user as possible.

Getting social

In the HBB-Next (6) initiative, researchers are developing user-centric technologies for enriching the TV-viewing experience with social networking, multiple device access and group-tailored content recommendations, as well as the seamless mixing of broadcast content, complementary internet content and user-generated content. In NoTube (7), a team from nine countries has focused on using semantic technologies to annotate content so computers can understand the meaning of what someone is watching, which, combined with data on viewing habits and social networking activities, enables highly personalised, intelligent services. And in Comet (8), researchers are focusing primarily on user-generated content, developing an architecture for content-aware networks to make it much easier to locate, access and distribute videos.

Meanwhile, in LinkedTV (9), a team from eight European countries are going one step further, putting cloud computing firmly at the centre of the TV-internet convergence mix. By weaving content together to deliver a single, integrated and interactive experience, the researchers are building an online cloud of networked audio-visual content that will be accessible regardless of place, device or source. Their goal is to provide an interactive, user-controlled TV-like experience -- whether the content is being watched on a TV set, smartphone, tablet or personal computing device.

'Browsing TV and web content should be so smooth and interrelated that in the end even "surfing the web" or "watching TV" will become a meaningless distinction,' the LinkedTV team says.

The projects featured in this article have been supported by the Seventh Framework Programme (FP7) for research.

(1) Optiband: Optimization of Bandwidth for IPTV video streaming (2) P2P-Next: Next generation peer-to-peer content delivery platform (3) Romeo: Remote Collaborative Real-Time Multimedia Experience over the Future Internet (4) Napa-Wine: Network-Aware P2P-TV Application over Wise Networks (5) Vista-TV: Linked Open Data, Statistics and Recommendations for Live TV (6) HBB-Next: Next Generation Hybrid Media (7) NoTube: Networks and Ontologies for the Transformation and Unification of Broadcasting and the Internet (8) Comet: COntent Mediator architecture for content-aware nETworks (9) LinkedTV: Television linked to the Web


Story Source:

The above story is reprinted from materials provided by European Commission, CORDIS.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.


Tuesday, January 8, 2013

Best of both worlds: Hybrid approach sheds light on crystal structure solution

Dec. 11, 2012 — Understanding the arrangement of atoms in a solid -- one of solids' fundamental properties -- is vital to advanced materials research. For decades, two camps of researchers have been working to develop methods to understand these so-called crystal structures. "Solution" methods, championed by experimental researchers, draw on data from diffraction experiments, while "prediction" methods of computational materials scientists bypass experimental data altogether.

While progress has been made, computational scientists still cannot make crystal structure predictions routinely. Now, drawing on both prediction and solution methods, Northwestern University researchers have developed a new code to solve crystal structures automatically and in cases where traditional experimental methods struggle.

Key to the research was integrating evidence about solids' symmetry -- the symmetrical arrangement of atoms within the crystal structure -- into a promising computational model.

"We took the best of both worlds," said Chris Wolverton, professor of materials science and engineering at Northwestern's McCormick School of Engineering and expert in computational materials science. "Computational materials scientists had developed a great optimization algorithm, but it failed to take into account some important facts gathered by experimentalists. By simply integrating that information into the algorithm, we can have a much fuller understanding of crystal structures."

The resulting algorithm could allow researchers to understand the structures of new compounds for applications ranging from hydrogen storage to lithium-ion batteries.

A paper describing the research was published November 25 in the journal Nature Materials.

While both computational and experimental researchers have made strides in determining the crystal structure of materials, their efforts have some limitations. Diffraction experiments are labor-intensive and have high potential for human error, while most existing computational approaches neglect potentially valuable experimental input.

When computational and experimental research is combined, however, those limitations can be overcome, the researchers found.

In their research, the Northwestern authors seized onto an important fact: that while the precise atomic arrangements for a given solid may be unknown, experiments have revealed the symmetries present in tens of thousands of known compounds. This database of information is useful in solving the structures of new compounds.

The researchers were able to revise a useful model -- known as the genetic algorithm, which mimics the process of biological evolution -- to take those data into account.
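To make the idea concrete, the sketch below shows, in Python, how a genetic search over candidate structures can be constrained so that every candidate automatically respects a prescribed set of symmetry operations. The crystal representation, the placeholder 'energy' score and the two symmetry operations are illustrative assumptions for this article, not the Northwestern group's code.

    # Minimal sketch of a symmetry-constrained genetic algorithm for structure search.
    # The energy function, symmetry operations and crystal representation below are
    # illustrative placeholders, not the algorithm published by the Northwestern group.
    import random

    # Symmetry operations of a hypothetical space group, acting on fractional
    # coordinates. Here: identity and inversion through the origin.
    SYMMETRY_OPS = [
        lambda p: p,
        lambda p: tuple((-x) % 1.0 for x in p),
    ]

    def expand_by_symmetry(asymmetric_atoms):
        """Generate the full cell from atoms in the asymmetric unit."""
        cell = []
        for atom in asymmetric_atoms:
            for op in SYMMETRY_OPS:
                cell.append(op(atom))
        return cell

    def random_candidate(n_atoms):
        """A candidate holds atoms in the asymmetric unit only, so every structure
        the search visits automatically has the required symmetry."""
        return [tuple(random.random() for _ in range(3)) for _ in range(n_atoms)]

    def energy(candidate):
        """Placeholder 'energy': reward well-separated atoms in the expanded cell.
        A real implementation would call a first-principles or empirical potential."""
        cell = expand_by_symmetry(candidate)
        e = 0.0
        for i in range(len(cell)):
            for j in range(i + 1, len(cell)):
                d = sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                        for a, b in zip(cell[i], cell[j])) ** 0.5
                e += 1.0 / (d + 1e-6)          # crude repulsion term
        return e

    def crossover(a, b):
        cut = random.randint(1, len(a) - 1)
        return a[:cut] + b[cut:]

    def mutate(candidate, rate=0.2):
        return [tuple(random.random() for _ in range(3)) if random.random() < rate else atom
                for atom in candidate]

    def evolve(n_atoms=4, pop_size=30, generations=50):
        population = [random_candidate(n_atoms) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=energy)                    # lower energy = fitter
            parents = population[: pop_size // 2]
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return min(population, key=energy)

    print("best asymmetric unit:", evolve())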

In the paper, the researchers used this technique to analyze the atomic structure of four technologically relevant solids whose crystal structures have been debated by scholars -- magnesium imide, ammonia borane, lithium peroxide, and high-pressure silane -- and demonstrated how their method solves these structures.


Story Source:

The above story is reprinted from materials provided by Northwestern University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Bryce Meredig, C. Wolverton. A hybrid computational–experimental approach for automated crystal structure solution. Nature Materials, 2012; DOI: 10.1038/nmat3490


Monday, January 7, 2013

Quantum computing with recycled particles

Oct. 23, 2012 — A research team from the University of Bristol's Centre for Quantum Photonics (CQP) have brought the reality of a quantum computer one step closer by experimentally demonstrating a technique for significantly reducing the physical resources required for quantum factoring.

The team have shown how it is possible to recycle the particles inside a quantum computer, so that quantum factoring can be achieved with only one third of the particles originally required. The research is published in the latest issue of Nature Photonics.

Using photons as the particles, the Bristol team constructed a quantum optical circuit that recycled one of the photons to set a new record for factoring 21 with a quantum algorithm -- all previous demonstrations had factored 15.

Dr Anthony Laing, who led the project, said: "Quantum computers promise to harness the counterintuitive laws of quantum mechanics to perform calculations that are forever out of reach of conventional classical computers. Realising such a device is one of the great technological challenges of the century."

While scientists and mathematicians are still trying to understand the full range of capabilities of quantum computers, the current driving application is the hard problem of factoring large numbers. The best classical computers can run for the lifetime of the universe, searching for the factors of a large number, yet still be unsuccessful.

In fact, Internet cryptographic protocols are based on this exponential overhead in computational time: if a third party wants to spy on your emails, they will need to solve a hard factoring problem first. A quantum computer, on the other hand, is capable of efficiently factoring large numbers, but the physical resources required mean that constructing such a device is highly challenging.
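The number-theoretic core of that speed-up is order-finding. The short Python sketch below runs the reduction classically for N = 21, the number factored in the Bristol experiment; the brute-force find_order function stands in for the step a quantum computer performs efficiently, so this is an illustration of the mathematics rather than of the photonic circuit itself.

    # Classical illustration of the order-finding reduction behind Shor's algorithm.
    # A quantum computer speeds up the order-finding step; the rest is classical.
    from math import gcd

    def find_order(a, N):
        """Smallest r > 0 with a**r = 1 (mod N), found by brute force here --
        exactly the step a quantum computer performs efficiently."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_factor(N, a):
        if gcd(a, N) != 1:
            return gcd(a, N)                 # lucky guess already shares a factor
        r = find_order(a, N)
        if r % 2 == 1:
            return None                      # odd order: try another a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None                      # trivial case: try another a
        return gcd(y - 1, N)

    print(shor_factor(21, 2))   # order of 2 mod 21 is 6; gcd(2**3 - 1, 21) = 7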

CQP PhD student Enrique Martín-López, who performed the experiment, said: "While it will clearly be some time before emails can be hacked with a quantum computer, this proof of principle experiment paves the way for larger implementations of quantum algorithms by using particle recycling."


Story Source:

The above story is reprinted from materials provided by University of Bristol.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Enrique Martín-López, Anthony Laing, Thomas Lawson, Roberto Alvarez, Xiao-Qi Zhou, Jeremy L. O'Brien. Experimental realization of Shor's quantum factoring algorithm using qubit recycling. Nature Photonics, 2012; DOI: 10.1038/nphoton.2012.259


Saturday, January 5, 2013

Footwear forensics: CSI needs to tread carefully

Oct. 26, 2012 — A new computer algorithm can analyze footwear marks left at a crime scene by matching them against clusters of footwear types, makes and tread patterns, even if the imprint recorded by crime scene investigators is distorted or only a partial print.

Footwear marks are found at crime scenes much more commonly than fingerprints, writes a team from the University at Buffalo, New York, in a forthcoming issue of the International Journal of Granular Computing, Rough Sets and Intelligent Systems. They point out that although footwear marks are common, forensic scientists often leave them unused because the marks may be distorted, only a partial print may be left, and the number of shoe shapes and sizes is vast. However, matching a footprint at a crime scene can quickly narrow the number of suspects and can tie different crime scenes to the same perpetrator even if other evidence is lacking.

The team, Yi Tang, Harish Kasiviswanathan and Sargur Srihari, has developed a way to group recurring patterns in a database of footwear marks so that the clustered data can be searched and compared to suspect prints much more quickly than with other techniques, whether manual or computer-based. The team explains that geometric shapes, including line segments, circles and ellipses, can be the focus, allowing the footwear to be quickly identified using an "attributed relational graph," or ARG. The attributes for every shape are defined in a way that provides scaling, rotation and translation invariance, the researchers explain. The team adds that introducing a measure of how different two marks might be, which they refer to as the footwear print distance (FPD), allows them to home in on a particular boot or shoe even if the recorded print is noisy or degraded, perhaps by the perpetrator retracing their steps or by other marks present at the scene.
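A rough Python sketch of that matching idea follows: each print is reduced to a bag of shape primitives with attributes chosen for rough scale, rotation and translation invariance, and a simple distance over those attributes ranks candidate shoes even for partial marks. The attribute set and the distance formula here are illustrative guesses, not the published ARG or FPD definitions.

    # Sketch of comparing footwear marks by invariant shape attributes, in the spirit
    # of the ARG/FPD idea described above. The attributes and the distance formula
    # are illustrative stand-ins, not the published definitions.
    from math import inf

    def shape_attributes(primitives):
        """primitives: list of dicts like {'kind': 'ellipse', 'aspect': 1.8}.
        Using kinds and aspect ratios rather than raw sizes or positions gives
        rough scale, rotation and translation invariance."""
        return sorted((p['kind'], round(p.get('aspect', 1.0), 1)) for p in primitives)

    def print_distance(a, b):
        """A crude 'footwear print distance': fraction of shape attributes not
        shared. It only penalises missing overlap, so partial prints still match."""
        fa, fb = shape_attributes(a), shape_attributes(b)
        shared = len(set(fa) & set(fb))
        return 1.0 - shared / max(1, min(len(fa), len(fb)))

    def best_match(query, database):
        """Return the reference print closest to the crime-scene mark."""
        best_id, best_d = None, inf
        for ref_id, ref in database.items():
            d = print_distance(query, ref)
            if d < best_d:
                best_id, best_d = ref_id, d
        return best_id, best_d

    database = {
        'boot_A': [{'kind': 'circle'}, {'kind': 'ellipse', 'aspect': 2.0}],
        'shoe_B': [{'kind': 'line'}, {'kind': 'line'}, {'kind': 'circle'}],
    }
    partial_mark = [{'kind': 'ellipse', 'aspect': 2.0}]      # distorted/partial print
    print(best_match(partial_mark, database))                 # -> ('boot_A', 0.0)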

The researchers have successfully tested their approach against footwear print retrieval systems currently used in forensic science. "In experimental runs our system has significantly higher accuracy than state-of-the-art footwear print retrieval systems," Tang says.


Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Yi Tang, Harish Kasiviswanathan, Sargur N. Srihari. An efficient clustering-based retrieval framework for real crime scene footwear marks. International Journal of Granular Computing, Rough Sets and Intelligent Systems, 2012; 2 (4): 327 DOI: 10.1504/IJGCRSIS.2012.049981


Predicting what topics will trend on Twitter: Algorithm offers new technique for analyzing data that fluctuate over time

Nov. 1, 2012 — Twitter's home page features a regularly updated list of topics that are "trending," meaning that tweets about them have suddenly exploded in volume. A position on the list is highly coveted as a source of free publicity, but the selection of topics is automatic, based on a proprietary algorithm that factors in both the number of tweets and recent increases in that number.

At the Interdisciplinary Workshop on Information and Decision in Social Networks at MIT in November, Associate Professor Devavrat Shah and his student, Stanislav Nikolov, will present a new algorithm that can, with 95 percent accuracy, predict which topics will trend an average of an hour and a half before Twitter's algorithm puts them on the list -- and sometimes as much as four or five hours before.

The algorithm could be of great interest to Twitter, which could charge a premium for ads linked to popular topics, but it also represents a new approach to statistical analysis that could, in theory, apply to any quantity that varies over time: the duration of a bus ride, ticket sales for films, maybe even stock prices.

Like all machine-learning algorithms, Shah and Nikolov's needs to be "trained": it combs through data in a sample set -- in this case, data about topics that previously did and did not trend -- and tries to find meaningful patterns. What distinguishes it is that it's nonparametric, meaning that it makes no assumptions about the shape of patterns.

Let the data decide

In the standard approach to machine learning, Shah explains, researchers would posit a "model" -- a general hypothesis about the shape of the pattern whose specifics need to be inferred. "You'd say, 'Series of trending things … remain small for some time and then there is a step,'" says Shah, the Jamieson Career Development Associate Professor in the Department of Electrical Engineering and Computer Science. "This is a very simplistic model. Now, based on the data, you try to train for when the jump happens, and how much of a jump happens.

"The problem with this is, I don't know that things that trend have a step function," Shah explains. "There are a thousand things that could happen." So instead, he says, he and Nikolov "just let the data decide."

In particular, their algorithm compares changes over time in the number of tweets about each new topic to the changes over time of every sample in the training set. Samples whose statistics resemble those of the new topic are given more weight in predicting whether the new topic will trend or not. In effect, Shah explains, each sample "votes" on whether the new topic will trend, but some samples' votes count more than others'. The weighted votes are then combined, giving a probabilistic estimate of the likelihood that the new topic will trend.
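In outline, that weighted vote can be written in a few lines of Python. The sketch below uses a Euclidean distance between activity series and an exponential weighting constant; both are illustrative choices rather than the measures used by Shah and Nikolov.

    # Minimal sketch of nonparametric, weighted voting over reference time series,
    # in the spirit of the approach described above. The Euclidean distance and the
    # exponential weighting constant gamma are illustrative choices.
    import math

    def distance(series_a, series_b):
        """Euclidean distance between two equal-length activity series
        (e.g. tweets per interval over the same window)."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(series_a, series_b)))

    def trend_probability(observed, trending_examples, non_trending_examples, gamma=1.0):
        """Each training example casts a vote weighted by exp(-gamma * distance);
        the result is the weighted share of 'it trended' votes."""
        vote_yes = sum(math.exp(-gamma * distance(observed, s)) for s in trending_examples)
        vote_no = sum(math.exp(-gamma * distance(observed, s)) for s in non_trending_examples)
        return vote_yes / (vote_yes + vote_no)

    # Toy training data: per-interval tweet counts for topics that did / did not trend.
    trended = [[1, 2, 5, 12, 30], [0, 1, 4, 10, 25]]
    flat    = [[3, 3, 2, 3, 3],   [1, 0, 1, 1, 2]]

    new_topic = [0, 2, 6, 11, 28]
    p = trend_probability(new_topic, trended, flat, gamma=0.1)
    print(f"probability of trending: {p:.2f}")   # high for this ramp-shaped series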

In Shah and Nikolov's experiments, the training set consisted of data on 200 Twitter topics that did trend and 200 that didn't. In real time, they set their algorithm loose on live tweets, predicting trending with 95 percent accuracy and a 4 percent false-positive rate.

Shah predicts, however, that the system's accuracy will improve as the size of the training set increases. "The training sets are very small," he says, "but we still get strong results."

Keeping pace

Of course, the larger the training set, the greater the computational cost of executing Shah and Nikolov's algorithm. Indeed, Shah says, curbing computational complexity is the reason that machine-learning algorithms typically employ parametric models in the first place. "Our computation scales proportionately with the data," Shah says.

But on the Web, he adds, computational resources scale with the data, too: As Facebook or Google add customers, they also add servers. So his and Nikolov's algorithm is designed so that its execution can be split up among separate machines. "It is perfectly suited to the modern computational framework," Shah says.

In principle, Shah says, the new algorithm could be applied to any sequence of measurements performed at regular intervals. But the correlation between historical data and future events may not always be as clear cut as in the case of Twitter posts. Filtering out all the noise in the historical data might require such enormous training sets that the problem becomes computationally intractable even for a massively distributed program. But if the right subset of training data can be identified, Shah says, "It will work."

"People go to social-media sites to find out what's happening now," says Ashish Goel, an associate professor of management science at Stanford University and a member of Twitter's technical advisory board. "So in that sense, speeding up the process is something that is very useful." Of the MIT researchers' nonparametric approach, Goel says, "it's very creative to use the data itself to find out what trends look like. It's quite creative and quite timely and hopefully quite useful."


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.


Friday, January 4, 2013

Biology-friendly robot programming language: Training your robot the PaR-PaR way

Oct. 23, 2012 — Teaching a robot a new trick is a challenge. You can't reward it with treats and it doesn't respond to approval or disappointment in your voice. For researchers in the biological sciences, however, the future training of robots has been made much easier thanks to a new program called "PaR-PaR."

Nathan Hillson, a biochemist at the U.S. Department of Energy (DOE)'s Joint BioEnergy Institute (JBEI), led the development of PaR-PaR, which stands for Programming a Robot. PaR-PaR is a simple high-level, biology-friendly, robot-programming language that allows researchers to make better use of liquid-handling robots and thereby make possible experiments that otherwise might not have been considered.

"The syntax and compiler for PaR-PaR are based on computer science principles and a deep understanding of biological workflows," Hillson says. "After minimal training, a biologist should be able to independently write complicated protocols for a robot within an hour. With the adoption of PaR-PaR as a standard cross-platform language, hand-written or software-generated robotic protocols could easily be shared across laboratories."

Hillson, who directs JBEI's Synthetic Biology program and also holds an appointment with the Lawrence Berkeley National Laboratory (Berkeley Lab)'s Physical Biosciences Division, is the corresponding author of a paper describing PaR-PaR that appears in the American Chemical Society journal Synthetic Biology. The paper is titled "PaR-PaR Laboratory Automation Platform." Co-authors are Gregory Linshiz, Nina Stawski, Sean Poust, Changhao Bi and Jay Keasling.

Using robots to perform labor-intensive, multi-step biological tasks, such as the construction and cloning of DNA molecules, can increase research productivity and lower costs by reducing experimental error rates and providing more reliable and reproducible experimental data. To date, however, automation companies have targeted the highly repetitive industrial laboratory operations market while largely ignoring the development of flexible, easy-to-use programming tools for dynamic, non-repetitive research environments. As a consequence, researchers in the biological sciences have had to depend upon professional programmers or vendor-supplied graphical user interfaces with limited capabilities.

"Our vision was for a single protocol to be executable across different robotic platforms in different laboratories, just as a single computer software program is executable across multiple brands of computer hardware," Hillson says. "We also wanted robotics to be accessible to biologists, not just to robot specialist programmers, and for a laboratory that has a particular brand of robot to benefit from a wide variety of software and protocols."

Hillson, who earlier led the development of a unique software program called "j5" for identifying cost-effective DNA construction strategies, says that beyond enabling biologists to manually instruct robots in a time-effective manner, PaR-PaR can also amplify the utility of biological design automation software tools such as j5.

"Before PaR-PaR, j5 only outputted protocols for one single robot platform," Hillson says. "After PaR-PaR, the same protocol can now be executed on many different robot platforms."

The PaR-PaR language uses an object-oriented approach that represents physical laboratory objects -- including reagents, plastic consumables and laboratory devices -- as virtual objects. Each object has associated properties, such as a name and a physical location, and multiple objects can be grouped together to create a new composite object with its own properties.

Actions can be performed on objects and sequences of actions can be consolidated into procedures that in turn are issued as PaR-PaR commands. Collections of procedural definitions can be imported into PaR-PaR via external modules.
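PaR-PaR's own syntax is not reproduced in this article, but the Python mock-up below illustrates the structure just described: lab items as named, located virtual objects, primitive actions performed on them, and actions grouped into a reusable procedure. The class and method names are invented for the illustration and do not come from the PaR-PaR language itself.

    # Illustrative mock-up of the object/action/procedure structure described above.
    # This is NOT PaR-PaR syntax; class and method names here are invented to show
    # how lab objects, actions and composite procedures could be modelled.

    class LabObject:
        """A physical item (plate, reagent, device) as a named, located virtual object."""
        def __init__(self, name, location):
            self.name, self.location = name, location

    class Robot:
        def __init__(self):
            self.log = []
        # Primitive actions performed on objects.
        def aspirate(self, source, volume_ul):
            self.log.append(f"aspirate {volume_ul} uL from {source.name} at {source.location}")
        def dispense(self, dest, volume_ul):
            self.log.append(f"dispense {volume_ul} uL into {dest.name} at {dest.location}")

    # A procedure: a named sequence of actions that can be reused like a command.
    def transfer(robot, source, dest, volume_ul):
        robot.aspirate(source, volume_ul)
        robot.dispense(dest, volume_ul)

    reagent = LabObject("buffer", location="deck_A1")
    plate = LabObject("assay_plate", location="deck_B2")
    robot = Robot()
    transfer(robot, reagent, plate, volume_ul=50)
    print("\n".join(robot.log))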

"A researcher, perhaps in conjunction with biological design automation software such as j5, composes a PaR-PaR script that is parsed and sent to a database," Hillson says. "The operational flow of the commands are optimized and adapted to the configuration of a specific robotic platform. Commands are then translated from the PaR-PaR meta-language into the robotic scripting language for execution."

Hillson and his colleagues have developed PaR-PaR as open-source software freely available through its web interface on the public PaR-PaR webserver http://parpar.jbei.org.

"Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work," Hillson says. "PaR-PaR accomplishes all of these objectives and is intended to benefit a broad segment of the biological research community, including non-profits, government agencies and commercial companies."

This work was primarily supported by the DOE Office of Science.


Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Gregory Linshiz, Nina Stawski, Sean Poust, Changhao Bi, Jay D. Keasling, Nathan J. Hillson. PaR-PaR Laboratory Automation Platform. ACS Synthetic Biology, 2012; DOI: 10.1021/sb300075t


Thursday, January 3, 2013

Researchers identify ways to exploit 'cloud browsers' for large-scale, anonymous computing

Nov. 28, 2012 — Researchers from North Carolina State University and the University of Oregon have found a way to exploit cloud-based Web browsers, using them to perform large-scale computing tasks anonymously. The finding has potential ramifications for the security of "cloud browser" services.

At issue are cloud browsers, which create a Web interface in the cloud so that computing is done there rather than on a user's machine. This is particularly useful for mobile devices, such as smartphones, which have limited computing power. The cloud-computing paradigm pools the computational power and storage of multiple computers, allowing shared resources for multiple users.

"Think of a cloud browser as being just like the browser on your desktop computer, but working entirely in the cloud and providing only the resulting image to your screen," says Dr. William Enck, an assistant professor of computer science at NC State and co-author of a paper describing the research.

Because these cloud browsers are designed to perform complex functions, the researchers wanted to see if they could be used to perform a series of large-scale computations that had nothing to do with browsing. Specifically, the researchers wanted to determine if they could perform those functions using the "MapReduce" technique developed by Google, which facilitates coordinated computation involving parallel efforts by multiple machines.

The research team knew that coordinating any new series of computations would entail passing large packets of data between different nodes, or cloud browsers. To address this challenge, researchers stored data packets on bit.ly and other URL-shortening sites, and then passed the resulting "links" between various nodes.

Using this technique, the researchers were able to perform standard computation functions using data packets that were 1, 10 and 100 megabytes in size. "It could have been much larger," Enck says, "but we did not want to be an undue burden on any of the free services we were using."
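The pattern can be imitated locally in a few lines of Python: "nodes" exchange only short keys that refer to payloads held in a shared store, standing in for the shortened URLs, while the map and reduce steps run as separate functions. No real URL-shortening service or cloud browser is contacted; the whole example is a stand-in for the researchers' setup.

    # Local imitation of the pattern described above: map/reduce work is split across
    # 'nodes' that pass around short references to data held elsewhere, standing in
    # for links from a URL-shortening service. No external service is contacted.
    import uuid

    SHARED_STORE = {}                      # stand-in for data parked behind short links

    def park(payload):
        """Store a payload and return a short reference (the 'shortened URL')."""
        key = uuid.uuid4().hex[:8]
        SHARED_STORE[key] = payload
        return key

    def fetch(key):
        return SHARED_STORE[key]

    def map_node(text_ref):
        """A 'cloud browser' node: expand the reference, emit (word, 1) pairs, park result."""
        words = fetch(text_ref).split()
        return park([(w.lower(), 1) for w in words])

    def reduce_node(pair_refs):
        """Another node: pull all intermediate references and sum the counts."""
        counts = {}
        for ref in pair_refs:
            for word, n in fetch(ref):
                counts[word] = counts.get(word, 0) + n
        return counts

    chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
    chunk_refs = [park(c) for c in chunks]           # only short keys move between nodes
    intermediate = [map_node(r) for r in chunk_refs]
    print(reduce_node(intermediate))                  # {'the': 3, 'fox': 2, ...}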

"We've shown that this can be done," Enck adds. "And one of the broader ramifications of this is that it could be done anonymously. For instance, a third party could easily abuse these systems, taking the free computational power and using it to crack passwords."

However, Enck says cloud browsers can protect themselves to some extent by requiring users to create accounts -- and then putting limits on how those accounts are used. This would make it easier to detect potential problems.

The paper, "Abusing Cloud-Based Browsers for Fun and Profit," will be presented Dec. 6 at the 2012 Annual Computer Security Applications Conference in Orlando, Fla. The paper was co-authored by Vasant Tendulkar and Ashwin Shashidharan, graduate students at NC State, and Joe Pletcher, Ryan Snyder and Dr. Kevin Butler, of the University of Oregon. The research was supported by the National Science Foundation and the U.S. Army Research Office.


Story Source:

The above story is reprinted from materials provided by North Carolina State University.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.


Wednesday, January 2, 2013

Eating or spending too much? Blame it on social networking sites

Dec. 11, 2012 — Participating in online social networks can have a detrimental effect on consumer well-being by lowering self-control among certain users, according to a new study in the Journal of Consumer Research.

"Using online social networks can have a positive effect on self-esteem and well-being. However, these increased feelings of self-worth can have a detrimental effect on behavior. Because consumers care about the image they present to close friends, social network use enhances self-esteem in users who are focused on close friends while browsing their social network. This momentary increase in self-esteem leads them to display less self-control after browsing a social network," write authors Keith Wilcox (Columbia University) and Andrew T. Stephen (University of Pittsburgh).

Online social networks are having a fundamental impact on society. Facebook, the largest, has over one billion active users. Does using a social network impact the choices consumers make in their daily lives? If so, what effect does it have on consumer well-being?

A series of interesting studies showed that Facebook usage lowers self-control for consumers who focus on close friends while browsing their social network. Specifically, consumers focused on close friends are more likely to choose an unhealthy snack after browsing Facebook due to enhanced self-esteem. Greater Facebook use was associated with a higher body-mass index, increased binge eating, a lower credit score, and higher levels of credit card debt for consumers with many close friends in their social network.

"These results are concerning given the increased time people spend using social networks, as well as the worldwide proliferation of access to social networks anywhere anytime via smartphones and other gadgets. Given that self-control is important for maintaining social order and personal well-being, this subtle effect could have widespread impact. This is particularly true for adolescents and young adults who are the heaviest users of social networks and have grown up using social networks as a normal part of their daily lives," the authors conclude.


Story Source:

The above story is reprinted from materials provided by University of Chicago Press Journals.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Keith Wilcox, Andrew T. Stephen. Are Close Friends the Enemy? Online Social Networks, Self-Esteem, and Self-Control. Journal of Consumer Research, 2012; DOI: 10.1086/668794


Leap forward in brain-controlled computer cursors: New algorithm greatly improves speed and accuracy

Nov. 18, 2012 — Stanford researchers have designed the fastest, most accurate algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm's speed, accuracy and natural movement approach those of a real arm, doubling performance of existing algorithms.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement still activate as if trying to make the immobile limb work again. Despite neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons, and after passing those signals through a mathematical decode algorithm, can use them to control computer cursors with thoughts. The work is part of a field known as neural prosthetics.

A team of Stanford researchers have now developed an algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results are to be published Nov. 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached performance of the real arm. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

"These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford," said Shenoy.

Sensing mental movement in real time

The system relies on a silicon chip implanted into the brain, which records "action potentials" in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer with key information about the direction and speed of the user's intended movement.

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. "Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction," said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked "online," under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly while guiding the cursor to a target, just as a hand and eye would work in tandem to move a mouse cursor onto an icon on a computer desktop. If the cursor strays too far to the left, for instance, the user is likely to adjust their imagined movements to redirect it to the right. The team designed the system to learn from the user's corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target -- an onscreen dot -- and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy. The path of the cursor from the starting point to the target was straighter and it reached the target twice as quickly as earlier systems, achieving 75 to 85 percent of the speed of real arms.

"This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology," said Jose Carmena, associate professor of electrical engineering and neuroscience at the University of California Berkeley.

A smarter algorithm

Critical to ReFIT's time-to-target improvement was its superior ability to stop the cursor. While the old model's cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of "training" the algorithm about movement. When the monkey used his real arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity. Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey's brain activity to refine their algorithm, increasing its accuracy.

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor's position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.
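As a simplified illustration of what decoding both quantities looks like, the Python sketch below maps a small vector of firing rates to a velocity estimate and a position estimate, then blends the two to update the cursor. The linear weights and the blending factor are invented for the example; this is not the published ReFIT decoder.

    # Simplified illustration of decoding both velocity and position of a cursor
    # from neural firing rates, in the spirit of the description above. The linear
    # weights and blending factor are invented; this is not the ReFIT algorithm.

    # Firing rates (spikes/s) of a small group of "neurons" at each time step.
    firing_rates = [
        [40, 10, 22, 18],
        [45, 12, 20, 15],
        [30, 25, 18, 30],
    ]

    # Hypothetical linear mapping from firing rates to (vx, vy) and to (x, y).
    W_VEL = [[0.010, -0.004, 0.002, -0.003],     # row 0 -> vx
             [-0.002, 0.008, -0.001, 0.006]]     # row 1 -> vy
    W_POS = [[0.05, -0.02, 0.01, -0.01],
             [-0.01, 0.03, -0.01, 0.04]]
    DT = 0.05          # seconds per decoding step
    ALPHA = 0.8        # trust in the integrated-velocity path vs. direct position

    def dot(row, vec):
        return sum(w * v for w, v in zip(row, vec))

    x, y = 0.0, 0.0
    for rates in firing_rates:
        vx, vy = dot(W_VEL[0], rates), dot(W_VEL[1], rates)
        px, py = dot(W_POS[0], rates), dot(W_POS[1], rates)
        # Blend the integrated-velocity estimate with the direct position estimate.
        x = ALPHA * (x + vx * DT) + (1 - ALPHA) * px
        y = ALPHA * (y + vy * DT) + (1 - ALPHA) * py
        print(f"cursor at ({x:.3f}, {y:.3f}), velocity ({vx:.3f}, {vy:.3f})")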

An engineering eye

Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. "The core engineering goal is to achieve highest possible performance and robustness for a potential clinical device," he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics. Much of the existing research in this field has focused on differentiating among individual neurons in the brain. Importantly, such a detailed approach has allowed neuroscientists to create a detailed understanding of the individual neurons that control arm movement.

The individual neuron approach has its drawbacks, Gilja said. "From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error-prone," he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year, they can no longer accurately interpret the brain's intended movement. Gilja said the Stanford system is working very well more than four years later.

"Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we've been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy's study is a big step toward clinically useful brain-machine technology that have faster, smoother, more natural movements," said James Gnadt, PhD, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. Near term, precise, accurate control of a cursor is a simplified task with enormous value for paralyzed people.

"We think we have a good chance of giving them something very useful," he said. The team is now translating these innovations to paralyzed people as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation; NSF, NDSEG, and SGF Graduate Fellowships; DARPA ("Revolutionizing Prosthetics" and "REPAIR"); and NIH (NINDS-CRCNS and Director's Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao, and Stephen Ryu.


Story Source:

The above story is reprinted from materials provided by Stanford School of Engineering. The original article was written by Kelly Servick, science-writing intern.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Vikash Gilja, Paul Nuyujukian, Cindy A Chestek, John P Cunningham, Byron M Yu, Joline M Fan, Mark M Churchland, Matthew T Kaufman, Jonathan C Kao, Stephen I Ryu, Krishna V Shenoy. A high-performance neural prosthesis enabled by control algorithm design. Nature Neuroscience, 2012; DOI: 10.1038/nn.3265
