
Monday, February 4, 2013

Computer scientists develop new way to study molecular networks

Jan. 24, 2013 — In biology, molecules can have multi-way interactions within cells, and until recently, computational analysis of these links has been "incomplete," according to T. M. Murali, associate professor of computer science in the College of Engineering at Virginia Tech.

His group's article describing a new approach to address these shortcomings, titled "Reverse Engineering Molecular Hypergraphs," received the Best Paper Award at the 2012 ACM Conference on Bioinformatics, Computational Biology and Biomedicine.

Intricate networks of connections among molecules control the processes that occur within cells. The "analysis of these interaction networks has relied almost entirely on graphs for modeling the information. Since a link in a graph connects at most two molecules (e.g., genes or proteins), such edges cannot accurately represent interactions among multiple molecules. These interactions occur very often within cells," the computer scientists wrote in their paper.

To overcome these limitations of graphs, Murali and his students turned to hypergraphs, a generalization of graphs in which a single hyperedge can connect multiple molecules.

"We used hypergraphs to capture the uncertainty that is inherent in reverse engineering gene to gene networks from systems biology datasets," explained Ahsanur Rahman, the lead author on the paper. "We believe hypergraphs are powerful representations for capturing the uncertainty in a network's structure."

They developed reliable algorithms that can discover hyperedges supported by sets of networks. In ongoing research, the scientists seek to use hyperedges to suggest new experiments. By capturing uncertainty in network structure, hyperedges can directly suggest groups of genes for which further experiments may be required in order to precisely discover interaction patterns. Incorporating the data from these experiments might help to refine hyperedges and resolve the interactions among molecules, resulting in fruitful interplay and feedback between computation and experiment.
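To make the idea concrete, here is a minimal, hypothetical Python sketch of a hypergraph and a naive check for hyperedges that are supported across several inferred networks. It illustrates the general notion only; it is not the algorithm from the paper, and the names `Hypergraph` and `is_supported` are invented for this example.

```python
# Illustrative sketch only, not the method from "Reverse Engineering Molecular Hypergraphs".
from itertools import combinations

class Hypergraph:
    """A hypergraph: each hyperedge is a frozenset of two or more nodes."""
    def __init__(self):
        self.nodes = set()
        self.hyperedges = set()

    def add_hyperedge(self, *members):
        edge = frozenset(members)
        self.nodes |= edge
        self.hyperedges.add(edge)

def is_supported(candidate, networks, min_networks=2):
    """Return True if every pair of molecules in the candidate hyperedge is
    directly linked in at least `min_networks` of the ordinary graphs."""
    count = 0
    for graph in networks:  # each graph: dict mapping node -> set of neighbours
        if all(b in graph.get(a, set()) for a, b in combinations(candidate, 2)):
            count += 1
    return count >= min_networks

# Example: three genes interacting as a group, seen in two of three networks.
networks = [
    {"geneA": {"geneB", "geneC"}, "geneB": {"geneA", "geneC"}, "geneC": {"geneA", "geneB"}},
    {"geneA": {"geneB", "geneC"}, "geneB": {"geneA", "geneC"}, "geneC": {"geneA", "geneB"}},
    {"geneA": {"geneB"}, "geneB": {"geneA"}, "geneC": set()},
]
hg = Hypergraph()
if is_supported({"geneA", "geneB", "geneC"}, networks):
    hg.add_hyperedge("geneA", "geneB", "geneC")
```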

Murali worked with doctoral candidates Ahsanur Rahman and Christopher L. Poirel and with David L. Badger, a software engineer in his group (all of Blacksburg, Va., and all in the computer science department), using funding from the National Institutes of Health and the National Science Foundation to better understand the uncertainty in these various forms of interactions.

Murali is also the co-director of the Institute for Critical Technology and Applied Science's Center for Systems Biology of Engineered Tissues and the associate program director for the computational tissue engineering interdisciplinary graduate education program at Virginia Tech.


Story Source:

The above story is reprinted from materials provided by Virginia Tech, via EurekAlert!, a service of AAAS.


Sunday, February 3, 2013

Keeping to your New Year resolutions with PiFace

Jan. 8, 2013 — After a festive period of excess, a January diet is one of the most common New Year resolutions.

Sticking to it, however, is harder, with temptation around every corner and inside every cupboard.

Now University of Manchester scientists have come up with a unique deterrent -- a talking, tweeting chicken guarding your cupboards to shame hungry dieters into abstaining.

The chicken not only barks out orders at sneaky snackers but even posts to their Twitter account to publicly shame them if they stray. It is driven by a Raspberry Pi -- a tiny, single-board computer.

Raspberry Pi and PiFace, an add-on board that drives real-life applications, are among the simplest and most user-friendly ways for computers to interface with the world, and they are being used by University of Manchester scientists to inspire the next generation of computer whizzkids.

The credit-card sized computers have a vast range of potential applications that will get young people fired up for computing. As well as the cupboard watcher, The University has helped youngsters make birdboxes that tweet and photograph birds, control Scalextric cars and build interactive toys that react to the weather. London Zoo is also interested in collaborating to record animal movements.

The academics, from the University's School of Computer Science, have run a series of workshops for schoolteachers aiming to transform the teaching of computing in schools.

Raspberry Pi and PiFace put the fun back into computing, and the academics hope to have a major influence on changing the way schools and society view the subject.

The academics have found that schoolchildren entering university have a much lower level of technical knowledge than in previous years, and the Government is keen to improve how computing is taught in schools by taking a more scientific approach to the subject.

Raspberry Pi was developed by the Raspberry Pi Foundation with the aim of improving teaching of basic computer science in schools.

PiFace devices sit on top of the Raspberry Pi to control the real world -- powering motors, controlling robots, triggering cameras and using sensor networks. With the Pi they have all the capabilities of a computer but are more flexible and can be embedded in the real world -- costing as little as £40 per kit.
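As a rough illustration of the kind of program that drives a project like the cupboard-guarding chicken, here is a hedged Python sketch that polls an input and pulses a relay, assuming the `pifacedigitalio` library (whose exact API can vary between versions). The pin numbers, wiring and the `play_warning` helper are hypothetical, not details from the Manchester build.

```python
# Hedged sketch: watching a cupboard door with a PiFace add-on (assumed wiring).
import time
import pifacedigitalio

DOOR_SWITCH = 0    # assumed: input pin wired to a switch on the cupboard door
SPEAKER_RELAY = 0  # assumed: relay triggering the chicken's recorded warning

def play_warning(piface):
    """Pulse the relay that triggers the warning message (hypothetical wiring)."""
    piface.relays[SPEAKER_RELAY].turn_on()
    time.sleep(2)
    piface.relays[SPEAKER_RELAY].turn_off()

def main():
    piface = pifacedigitalio.PiFaceDigital()
    while True:
        if piface.input_pins[DOOR_SWITCH].value:  # door opened
            play_warning(piface)
            # A real build could also post to Twitter here to shame the snacker.
        time.sleep(0.5)

if __name__ == "__main__":
    main()
```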

Dr David Rydeheard, from the School of Computer Science, said: "This is an exciting development, taking computing out of its box and allowing schoolchildren to play with the science of computing.

"Schools have physics, chemistry and biology laboratories to teach these subjects. The combination of Raspberry Pi and PiFace creates a cheap personal laboratory for computer science that every child can own.

"The future wealth of our country depends crucially on our expertise in the science and technology of computing. At the moment schools fail to teach their students computing: how to design and build computing systems. Raspberry Pi and PiFace are ideal for schools to use to teach this key subject."

Workshops for teachers using Raspberry Pi and PiFace have attracted more than 50 teachers from schools in the North West per session, and more recently The University has run workshops with children.

Dr Andrew Robinson was amazed by the children's response. He said: "It really fired their imagination.

"After seeing what Raspberry Pi and PiFace could do we had suggestions including an automated insulin monitor that can dial 999, and another that automatically reorders food when it detects the cupboard is bare.

"One child even came up with a design for a device that politely reminds you to put the toilet seat down after use. I was really blown away with what they came up with."

The team are also launching the Raspberry Pi Bake Off, an international competition for schools and hobbyists, challenging entrants to create useful gadgets to change the world using Raspberry Pi computers.

A video of children using PiFace with the Scalextric and birdbox during the Manchester Science Festival can be seen at http://www.youtube.com/watch?v=SjME3WU7ao0


Story Source:

The above story is reprinted from materials provided by The University of Manchester.


Researchers make DNA data storage a reality: Every film and TV program ever created -- in a teacup

Jan. 23, 2013 — Researchers at the EMBL-European Bioinformatics Institute (EMBL-EBI) have created a way to store data in the form of DNA – a material that lasts for tens of thousands of years. The new method, published January 23 in the journal Nature, makes it possible to store at least 100 million hours of high-definition video in about a cup of DNA.

There is a lot of digital information in the world – about three zettabytes’ worth (that’s 3000 billion billion bytes) – and the constant influx of new digital content poses a real challenge for archivists. Hard disks are expensive and require a constant supply of electricity, while even the best ‘no-power’ archiving materials such as magnetic tape degrade within a decade. This is a growing problem in the life sciences, where massive volumes of data – including DNA sequences – make up the fabric of the scientific record.

"We already know that DNA is a robust way to store information because we can extract it from bones of woolly mammoths, which date back tens of thousands of years, and make sense of it,” explains Nick Goldman of EMBL-EBI. “It’s also incredibly small, dense and does not need any power for storage, so shipping and keeping it is easy.”

Reading DNA is fairly straightforward, but writing it has until now been a major hurdle to making DNA storage a reality. There are two challenges: first, using current methods it is only possible to manufacture DNA in short strings; second, both writing and reading DNA are prone to errors, particularly when the same DNA letter is repeated. Nick Goldman and co-author Ewan Birney, Associate Director of EMBL-EBI, set out to create a code that overcomes both problems.

“We knew we needed to make a code using only short strings of DNA, and to do it in such a way that creating a run of the same letter would be impossible. So we figured, let’s break up the code into lots of overlapping fragments going in both directions, with indexing information showing where each fragment belongs in the overall code, and make a coding scheme that doesn't allow repeats. That way, you would have to have the same error on four different fragments for it to fail – and that would be very rare," says Ewan Birney.
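To give a feel for how a code can forbid runs of the same letter, here is a simplified Python sketch: the message is written in base 3, and each base-3 digit is then encoded as one of the three nucleotides that differ from the previously written one, so the same letter never appears twice in a row. This is an intuition-building simplification, not the encoding published in Nature (which also splits the strand into overlapping, indexed fragments), and the helper names are invented.

```python
# Simplified sketch of a "no repeated letters" DNA code, not the published scheme.
BASES = "ACGT"

def bytes_to_trits(data):
    """Write the message as a sequence of base-3 digits (trits)."""
    number = int.from_bytes(data, "big")
    trits = []
    while number:
        number, trit = divmod(number, 3)
        trits.append(trit)
    return list(reversed(trits)) or [0]

def trits_to_dna(trits, previous="A"):
    """Map each trit to one of the three bases that differ from the previous base."""
    dna = []
    for trit in trits:
        choices = [b for b in BASES if b != previous]  # three options, never a repeat
        previous = choices[trit]
        dna.append(previous)
    return "".join(dna)

message = b"hi"
strand = trits_to_dna(bytes_to_trits(message))
assert all(a != b for a, b in zip(strand, strand[1:]))  # no base repeats
print(strand)
```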

The new method requires synthesising DNA from the encoded information: enter Agilent Technologies, Inc, a California-based company that volunteered its services. Ewan Birney and Nick Goldman sent them encoded versions of: an .mp3 of Martin Luther King’s speech, “I Have a Dream”; a .jpg photo of EMBL-EBI; a .pdf of Watson and Crick’s seminal paper, “Molecular structure of nucleic acids”; a .txt file of all of Shakespeare's sonnets; and a file that describes the encoding.

“We downloaded the files from the Web and used them to synthesise hundreds of thousands of pieces of DNA – the result looks like a tiny piece of dust,” explains Emily Leproust of Agilent. Agilent mailed the sample to EMBL-EBI, where the researchers were able to sequence the DNA and decode the files without errors.

“We’ve created a code that's error tolerant using a molecular form we know will last in the right conditions for 10 000 years, or possibly longer,” says Nick Goldman. “As long as someone knows what the code is, you will be able to read it back if you have a machine that can read DNA.”

Although there are many practical aspects to solve, the inherent density and longevity of DNA makes it an attractive storage medium. The next step for the researchers is to perfect the coding scheme and explore practical aspects, paving the way for a commercially viable DNA storage model.


Story Source:

The above story is reprinted from materials provided by European Molecular Biology Laboratory (EMBL).


Journal Reference:

Nick Goldman, Paul Bertone, Siyuan Chen, Christophe Dessimoz, Emily M. LeProust, Botond Sipos, Ewan Birney. Towards practical, high-capacity, low-maintenance information storage in synthesized DNA. Nature, 2013; DOI: 10.1038/nature11875


Friday, February 1, 2013

Smart search engines for news videos

Jan. 7, 2013 — Searching for video recordings regularly pushes search engines to their limit. The truth of the matter is that purely automatic algorithms are not enough; user knowledge has to be harnessed, too. Now, researchers are making automated engines smarter.

Anyone who has visited one of the big online video portals or TV broadcasters’ media libraries to search for a video clip is already familiar with the search engines tasked with seeking out and flagging video footage. However, these engines have their weaknesses. Their results are based on automatic search algorithms that often go by text-based information alone. Although they can be used to locate and identify videos, a comparison of individual sequences is still very difficult. To make search engines even smarter, the Fraunhofer Institute for Digital Media Technology IDMT in Ilmenau has developed a piece of software called “NewsHistory” that will now make full use of user knowledge as well. Researchers will be presenting an initial demonstration version of the smart video search engine at the CeBIT trade fair in Hannover.

Technology learns from users

“NewsHistory provides users with search algorithms, a data model and a web-based user interface so that they can locate identical sequences within various news videos,” explains Patrick Aichroth from Fraunhofer IDMT. He is responsible for coordinating the institute’s R&D work within the EU’s CUbRIK project. Here, researchers are harnessing user knowledge to optimize and extend the capabilities of automated analysis techniques. “The search engine learns from each individual user, allowing it to keep improving search results. Not only does this improve the quality of results, but the resources needed to undertake the analysis are also cut down,” Aichroth continues.

NewsHistory allows each user to add additional information to the results generated by the search engine, including production and broadcast date, sources and keywords for videos. It is also possible to rate the results. Finally, the user’s search itself is a source of information, providing data that is incorporated into the search engine; the metadata of a newly uploaded video, for instance, passes into the database.
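A small Python sketch of what such user-supplied metadata might look like alongside the automatic results is shown below. The field names are assumptions made for illustration, not the project's actual data model.

```python
# Illustrative only: a possible shape for user annotations in a system like NewsHistory.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class VideoAnnotation:
    video_id: str
    production_date: Optional[date] = None  # added by users when known
    broadcast_date: Optional[date] = None
    source: Optional[str] = None             # e.g. the broadcaster or portal
    keywords: List[str] = field(default_factory=list)
    user_rating: Optional[int] = None        # rating of a search result, e.g. 1-5

annotation = VideoAnnotation(
    video_id="clip-0042",
    broadcast_date=date(2013, 1, 7),
    source="example broadcaster",
    keywords=["news", "interview"],
    user_rating=4,
)
```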

“Comparing digital video data online or within video databases is very complex,” explains Christian Weigel from the Audio-Visual Systems research group at the IDMT. “Videos that share the same content have for the most part been edited, meaning that they are scaled and encoded in a variety of formats. Also, search engines are often unable to distinguish images cropped from a larger picture, lower thirds or the zoom shots so popular with US news channels.”
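One common way to recognise visually similar frames despite rescaling or re-encoding is a perceptual fingerprint compared by Hamming distance; the Python sketch below uses a simple difference hash (dHash) per frame to illustrate the general problem. It is not the method Fraunhofer IDMT uses in NewsHistory, and it assumes the Pillow imaging library and example file names.

```python
# Hedged sketch: near-duplicate frame matching with a difference hash (dHash).
from PIL import Image

def dhash(image, size=8):
    """Reduce the frame to a tiny grayscale image and hash its horizontal gradients."""
    small = image.convert("L").resize((size + 1, size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Two frames are likely from the same shot if their hashes differ in only a few bits.
# frame_a, frame_b = Image.open("frame_a.png"), Image.open("frame_b.png")
# print(hamming(dhash(frame_a), dhash(frame_b)) <= 10)
```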

The demonstration version being presented at CeBIT will investigate how a selection of TV channels have made use of film footage, changed its form and broadcast it. The user interface displays commonalities and appraises them in graphic form. The search itself is conducted either by inputting text or by directly uploading individual video sequences. The researchers’ aim is to make the software sufficiently robust that it could also be used in the future to compare the multimedia content found on big online media portals. The scientists do not imagine archivists or journalists will be the only users. “NewsHistory is of particular interest to media and market researchers, say if they want to assess the televised political duels coming up this year,” concludes Weigel.


Story Source:

The above story is reprinted from materials provided by Fraunhofer-Gesellschaft.
