
Wednesday, October 31, 2012

Fast algorithm extracts and compares document meaning

ScienceDaily (Sep. 25, 2012) — A computer program could compare two documents and spot the differences in their meaning, thanks to a fast semantic algorithm developed by information scientists in Poland.

Writing in the International Journal of Intelligent Information and Database Systems, Andrzej Sieminski of the Technical University of Wroclaw explains that extracting meaning and calculating the level of semantic similarity between two pieces of text is a very difficult task without human intervention. Computer scientists have proposed various methods for addressing this problem, but they all suffer from high computational complexity, he says.

Sieminski has now attempted to reduce this complexity by merging a computationally efficient statistical approach to text analysis with a semantic component. Tests of the algorithm on English and Polish texts worked well. The test set consisted of 4,890 English sentences with 142,116 words and 11,760 Polish sentences with 184,524 words scraped from online services via their newsfeeds over the course of five days. Sieminski points out that the Polish documents required an additional level of sophistication in the algorithm for computing word meanings and disambiguating them.

Traditional "manual" methods of indexing simply cannot cope with the vast quantities of information generated daily by humanity as a whole, and by scientific research more specifically. The new algorithm, once optimised, could radically change the way in which we make archived documents searchable and allow knowledge to be extracted far more readily than is possible with standard indexing and search tools.

The approach also circumvents three critical problems faced by most users of conventional search engines. First, the lack of familiarity with advanced search options: with a semantic algorithm, advanced options become almost unnecessary. Second, the rigid nature of those options, which cannot capture the subtle nuances of a user's information needs: a tool that understands the meaning of a query and the meaning of the results it offers avoids this problem. Finally, the unwillingness to type a long query, or the unacceptably long time it takes: semantically aware search requires only simple input.

Sieminski points out that the key virtue of the research is the idea of using statistical similarity measures to assess semantic similarity. He explains that the semantic similarity of words can be inferred from the WordNet database, and he proposes using this database only during text indexing. "Indexing is done only once so the inevitably long processing time is not an issue," he says. "From that point on we use only statistical algorithms, which are fast and high performance."
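
To make the idea concrete, here is a minimal sketch (not Sieminski's actual implementation) of consulting WordNet once at indexing time and then comparing documents with a purely statistical cosine measure. It assumes NLTK and its WordNet corpus are installed, and the tokenisation and weighting are deliberately naive.

    # Illustrative only: WordNet is consulted once, at indexing time, to expand
    # each document with synonyms; every later comparison is purely statistical.
    from collections import Counter
    from math import sqrt
    from nltk.corpus import wordnet  # requires nltk.download('wordnet')

    def index_document(text):
        """Build a term-frequency vector expanded with WordNet synonyms."""
        terms = Counter()
        for word in text.lower().split():
            terms[word] += 1
            for synset in wordnet.synsets(word):        # the slow, semantic step
                for lemma in synset.lemma_names():
                    terms[lemma.lower()] += 1
        return terms

    def cosine(a, b):
        """Fast statistical comparison of two pre-built indexes."""
        dot = sum(weight * b[term] for term, weight in a.items() if term in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    doc1 = index_document("the president signed the new law")
    doc2 = index_document("the head of state approved fresh legislation")
    print(cosine(doc1, doc2))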


Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.


Journal Reference:

Andrzej Sieminski. Fast algorithm for assessing semantic similarity of texts. International Journal of Intelligent Information and Database Systems, 2012; 6: 495-512


Tuesday, October 30, 2012

Space travel with a new language in tow

ScienceDaily (Oct. 1, 2012) — On September 28, for the first time ever, SES, the Luxembourg-based satellite operator, had an Ariane 5 rocket carry into space a TV satellite, built by Astrium, that runs entirely on latest-generation software. Every single one of the programs used to operate the satellite was written in the new satellite language SPELL. The acronym stands for "Satellite Procedure Execution Language & Library."

SPELL is a new standard that unifies under one roof the many different programming languages previously used to operate satellites and their subsystems. The University of Luxembourg's Interdisciplinary Centre for Security, Reliability and Trust (SnT) has contributed substantially to SPELL's adoption in the operation of Astrium satellites. To this end, SnT scientists took an existing mathematical tool and refined it for practical application; with its help, procedures written in different native languages can now be translated into SPELL in a fully automated process.

SES is one of the world's biggest satellite operators, with a vast fleet of satellites in orbit. The satellites and their technical components are produced by different manufacturers who each use their own programming language. "Because of the complete and utter lack of common standards up until now, we used to have to make a big production out of operation and maintenance of the machines," explains Martin Halliwell, Chief Technology Officer at SES. "Our operators were working with a number of different programming languages to help us control our SES fleet through space." That is problematic, as the machines don't easily forgive programming errors. Says Halliwell: "If a single error is made, it may result in our satellite getting lost in space. Which, for us, literally means incurring millions in losses."

That is why SES decided some time ago to develop the open-source software SPELL. SPELL allows for the careful execution of every imaginable navigational procedure from any given ground control system for all potential satellites in the fleet. In other words, maximum flexibility with maximum security. "There is, however, a catch to the whole thing," concedes Dr. Frank Hermann, SnT scientist. "All the various control procedures that exist in different programming languages and that are being used must be converted over to SPELL. If that conversion is not automatic and one hundred percent error-free, it quickly turns into a very resource-intensive and error-prone undertaking."

Together with SES automation specialists, SnT's Frank Hermann and his colleagues have tackled the problem head-on, using a method known as triple graph transformation to automatically translate the programming languages employed by the new satellite's subsystems into the common language SPELL. According to Hermann, "triple graph transformation is a mathematical tool that has been the focus of active research since the 1990s. Along with other mathematical tools, it represents the ideal instrument for combining different programming languages under SPELL."

What's special about the new translation process is that it does not require any source code programming. "We are working with a visual development environment, which records translation rules in a graphical user interface," explains Hermann. These rules are automatically executed by specialized transformation tools. Quality assurance happens through consistency checks, which are automated as well. "Their efficacy has been documented through multiple formal mathematical proofs," says Hermann. During translation, every piece of information from the original language is first converted into a graph. "This creates a network made up of many different nodes on the graphic interface," explains Hermann. The network is then read and translated into target graphs for the target language SPELL. "Every single bit of information in the original language has a corresponding SPELL counterpart."
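
As a rough illustration of rule-based source-to-target translation that keeps an explicit correspondence between source and target elements, the tiny Python sketch below applies invented rules to an invented source procedure. It is not the SnT tool: real triple graph grammars carry formal correctness guarantees that a simple dictionary of templates does not, and the node types and SPELL-like templates here are hypothetical.

    # Hypothetical node types, rules and target templates, for illustration only.
    SOURCE_GRAPH = [                      # a procedure as a list of typed nodes
        {"id": 1, "type": "SendTC", "args": {"cmd": "HEATER_ON"}},
        {"id": 2, "type": "WaitTM", "args": {"param": "TEMP", "value": 20}},
    ]

    RULES = {                             # translation rules: source type -> target template
        "SendTC": "Send(command='{cmd}')",
        "WaitTM": "WaitFor(TM('{param}') >= {value})",
    }

    def translate(source_graph):
        target, correspondence = [], {}   # target statements + source/target links
        for node in source_graph:
            stmt = RULES[node["type"]].format(**node["args"])
            target.append(stmt)
            correspondence[node["id"]] = stmt   # every source node keeps a counterpart
        return target, correspondence

    statements, links = translate(SOURCE_GRAPH)
    print("\n".join(statements))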

The SES validation teams have confirmed that the translation is highly precise. "This was a prerequisite for being able to unanimously program our new satellite's systems using SPELL," says Martin Halliwell. SnT's Vice-Director, Prof. Thomas Engel, is very pleased with the SnT scientists' performance specifically and with the SES/SnT collaboration in general: "The new satellite and SPELL will now have to prove themselves in space. If everything runs smoothly -- which we are quite certain that it will -- our basic science research will have made an important contribution to increasing SES's performance and to making Luxembourg more competitive in this area."


Story Source:

The above story is reprinted from materials provided by Université du Luxembourg, via AlphaGalileo.


Monday, October 29, 2012

Computers get a better way to detect threats

ScienceDaily (Sep. 20, 2012) — UT Dallas computer scientists have developed a technique to automatically allow one computer in a virtual network to monitor another for intrusions, viruses or anything else that could cause a computer to malfunction.

The technique has been dubbed "space travel" because it sends computer data to a world outside its home, and bridges the gap between computer hardware and software systems.

"Space travel might change the daily practice for many services offered virtually for cloud providers and data centers today, and as this technology becomes more popular in a few years, for the user at home on their desktop," said Dr. Zhiquian Lin, the research team's leader and an assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.

As cloud computing is becoming more popular, new techniques to protect the systems must be developed. Since this type of computing is Internet-based, skilled computer specialists can control the main part of the system virtually -- using software to emulate hardware.

Lin and his team programmed space travel to use existing code to gather information in a computer's memory and automatically transfer it to a secure virtual machine -- one that is isolated and protected from outside interference.

"You have an exact copy of the operating system of the computer inside the secure virtual machine that a hacker can't compromise," Lin said. "Using this machine, then the user or antivirus software can understand what's happening with the space traveled computer setting off red flags if there is any intrusion.

Previously, software developers had to manually write such tools.

"With our technique, the tools already being used on the computer become part of the defense process," he said.

The gap between virtualized computer hardware and software operating on top of it was first characterized by Drs. Peter Chen and Brian Noble, faculty members from the University of Michigan.

"The ability to leverage existing code goes a long way in solving the gap problem inherent to many types of virtual machine services," said Chen, Arthur F. Thurnau Professor of Electrical Engineering and Computer Science, who first proposed the gap in 2001. "Fu and Lin have developed an interesting way to take existing code from a trusted system and automatically use it to detect intrusions."

Lin said the space travel technique will help the FBI understand what is happening inside a suspect's computer even if they are physically miles away, instead of having to buy expensive software.

Space travel was presented at the most recent IEEE Symposium on Security and Privacy. Lin developed this with Yangchun Fu, a research assistant in computer science.

"This is the top conference in cybersecurity, said Bhavani Thuraisingham, executive director of the UT Dallas Cyber Security Research and Education Center and a Louis A. Beecherl Jr. Distinguished Professor in the Jonsson School. "It is a major breakthrough that virtual developers no longer need to write any code to bridge the gap by using the technology invented by Dr. Lin and Mr. Fu. This research has given us tremendous visibility among the cybersecurity research community around the world."


Story Source:

The above story is reprinted from materials provided by University of Texas, Dallas.


Turn your dreams into music

ScienceDaily (Sep. 10, 2012) — Computer scientists in Finland have developed a method that automatically composes music out of sleep measurements.

Developed under Hannu Toivonen, Professor of Computer Science at the University of Helsinki, Finland, the software automatically composes synthetic music using data related to a person's own sleep as input.

The composition program is the work of Aurora Tulilaulu, a student of Professor Toivonen.

"The software composes a unique piece based on the stages of sleep, movement, heart rate and breathing. It compresses a night's sleep into a couple of minutes," she describes.

"We are developing a novel way of illustrating, or in fact experiencing, data. Music can, for example, arouse a variety of feelings to describe the properties of the data. Sleep analysis is a natural first application," Hannu Toivonen justifies the choice of the research topic.

The project utilises a sensitive force sensor placed under the mattress.

"Heartbeats and respiratory rhythm are extracted from the sensor's measurement signal, and the stages of sleep are deducted from them," says Joonas Paalasmaa, a postgraduate student in the Department of Computer Science. He designed the sleep stage software at Beddit, a company that provides services in the field.

The composition service is available online at http://sleepmusicalization.net/. The users of Beddit's service can have music composed from their own sleep, while others can listen to the compositions. The online service is the work of the fourth research team member, Mikko Waris.


Story Source:

The above story is reprinted from materials provided by Helsingin yliopisto (University of Helsinki), via AlphaGalileo.


Sunday, October 28, 2012

Engineers built a supercomputer from 64 Raspberry Pi computers and Lego

ScienceDaily (Sep. 11, 2012) — Computational Engineers at the University of Southampton have built a supercomputer from 64 Raspberry Pi computers and Lego.

The team, led by Professor Simon Cox, consisted of Richard Boardman, Andy Everett, Steven Johnston, Gereon Kaiping, Neil O'Brien, Mark Scott and Oz Parchment, along with Professor Cox's son James Cox (aged 6) who provided specialist support on Lego and system testing.

Professor Cox comments: "As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer."

The racking was built using Lego with a design developed by Simon and James, who has also been testing the Raspberry Pi by programming it with the free programming languages Python and Scratch over the summer. The machine, named "Iridis-Pi" after the University's Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes over Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1 TB of memory (a 16 GB SD card in each Raspberry Pi). Professor Cox uses the free plug-in 'Python Tools for Visual Studio' to develop code for the Raspberry Pi.

Professor Cox adds: "The first test we ran -- well obviously we calculated Pi on the Raspberry Pi using MPI, which is a well-known first test for any new supercomputer."
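
For readers who want to try the same first test, a minimal MPI calculation of Pi looks roughly like the sketch below. It is written with mpi4py rather than the Southampton team's own code, and would be launched across the nodes with something like mpiexec -n 64 python pi_mpi.py.

    # Midpoint-rule estimate of Pi, split across MPI ranks.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 10_000_000                        # total number of rectangles
    h = 1.0 / n
    local_sum = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)   # this rank's slice of the sum
                    for i in range(rank, n, size))

    pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)  # combine partial sums on rank 0
    if rank == 0:
        print("Pi is approximately", pi)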

"The team wants to see this low-cost system as a starting point to inspire and enable students to apply high-performance computing and data handling to tackle complex engineering and scientific challenges as part of our on-going outreach activities."

James Cox (aged 6) says: "The Raspberry Pi is great fun and it is amazing that I can hold it in my hand and write computer programs or play games on it."

If you want to build a Raspberry Pi Supercomputer yourself see: http://www.soton.ac.uk/~sjc/raspberrypi


Story Source:

The above story is reprinted from materials provided by University of Southampton, via AlphaGalileo.


Saturday, October 27, 2012

Computers match humans in understanding art

ScienceDaily (Sep. 25, 2012) — Understanding and evaluating art has widely been considered a task meant for humans -- until now. Computer scientists Lior Shamir and Jane Tarakhovsky of Lawrence Technological University in Michigan tackled the question "can machines understand art?" and the results were very surprising: the algorithm they developed demonstrates that computers are able to "understand" art in a fashion very similar to how art historians perform their analysis, mimicking the perception of expert art critics.

In the experiment, published in the recent issue of ACM Journal on Computing and Cultural Heritage, the researchers used approximately 1,000 paintings of 34 well-known artists, and let the computer algorithm analyze the similarity between them based solely on the visual content of the paintings, and without any human guidance. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.

The analysis showed that the computer was clearly able to identify the differences between classical realism and modern artistic styles, and automatically separated the painters into two groups, 18 classical painters and 16 modern painters. Inside these two broad groups the computer identified sub-groups of painters that were part of the same artistic movements. For instance, the computer automatically placed the High Renaissance artists Raphael, Leonardo Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were also clustered together by the algorithm, showing that the computer automatically identified that these painters share similar artistic styles.

The automatic computer analysis is in agreement with the view of art historians, who associate these three painters with the Baroque artistic movement. Similarly, the computer algorithm deduced that Gauguin and Cézanne, both considered post-impressionists, have similar artistic styles, and also identified similarities between the styles of Salvador Dali, Max Ernst, and Giorgio de Chirico, all of whom are considered by art historians to be part of the surrealism school of art. Overall, the computer automatically produced an analysis that is in large agreement with the influential links between painters and artistic movements as defined by art historians and critics.

While the average non-expert can normally make the broad differentiation between modern art and classical realism, they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism. The experiment showed that machines can outperform untrained humans in the analysis of fine art.

The experiment was performed by computing 4,027 numerical image content descriptors from each painting -- numbers that reflect the content of the image, such as texture, color and shapes, in a quantitative fashion. This allows the computer to capture a great many aspects of the visual content, use pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles, and then quantify these similarities.
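
In outline, the pipeline resembles the sketch below: compute numerical descriptors for each painting, average them per painter, and cluster the painters. The handful of colour and texture statistics here merely stand in for the 4,027 descriptors used in the study, and the image paths and painter list are hypothetical.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from skimage.color import rgb2gray
    from skimage.feature import graycomatrix, graycoprops
    from skimage.io import imread

    def descriptors(path):
        """A few colour and texture statistics for one painting."""
        img = imread(path)
        gray = (rgb2gray(img) * 255).astype(np.uint8)
        glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256)
        return np.array([
            img.mean(), img.std(),                   # global colour statistics
            graycoprops(glcm, "contrast")[0, 0],     # texture contrast
            graycoprops(glcm, "homogeneity")[0, 0],  # texture homogeneity
        ])

    # Hypothetical image files grouped by painter.
    paintings = {"Raphael": ["raphael1.jpg"], "Vermeer": ["vermeer1.jpg"],
                 "Dali": ["dali1.jpg"]}
    painters = list(paintings)
    features = np.array([np.mean([descriptors(p) for p in paths], axis=0)
                         for paths in paintings.values()])

    # Hierarchical clustering of the painters, cut into two broad groups.
    groups = fcluster(linkage(features, method="average"), t=2, criterion="maxclust")
    print(dict(zip(painters, groups)))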


Story Source:

The above story is reprinted from materials provided by Lawrence Technological University, via EurekAlert!, a service of AAAS.


Journal Reference:

Lior Shamir, Jane A. Tarakhovsky. Computer analysis of art. Journal on Computing and Cultural Heritage, 2012; 5 (2): 1 DOI: 10.1145/2307723.2307726


Friday, October 26, 2012

Education: Get with the computer program

ScienceDaily (Oct. 5, 2012) — From email to Twitter, blogs to word processors, computer programs provide countless communications opportunities. While social applications have dominated the development of the participatory web for users and programmers alike, this era of Web 2.0 is applicable to more than just networking opportunities: it impacts education.

The integration of increasingly sophisticated information and communication technologies (ICTs) is sweeping university classrooms. Understanding how learners and instructors perceive the effectiveness of these tools in the classroom is critical to the success or failure of their integration into higher education settings. A new study led by Concordia University shows that when it comes to pedagogy, students prefer an engaging lecture to a targeted tweet.

Twelve universities across Quebec recently signed up to be a part of the first cross-provincial study of perceptions of ICT integration and course effectiveness on higher learning. This represented the first pan-provincial study to assess how professors are making the leap from lectures to LinkedIn -- and whether students are up for the change to the traditional educational model.

At the forefront of this study was Concordia's own Vivek Venkatesh. As associate dean of academic programs and development within the School of Graduate Studies, he has a particular interest in how education is evolving within post-secondary institutions. To conduct the study, Venkatesh partnered with Magda Fusaro from UQAM's Department of Management and Technology. Together, they conducted a pilot project at UQAM before rolling the project out to universities across the province.

"We hit the ground running and received an overwhelmingly positive response with 15,020 students and 2,640 instructors responding to our electronic questionnaires in February and March of 2011," recalls Venkatesh. The 120-item surveys gauged course structure preferences, perceptions of the usefulness of teaching methods, and the level of technology knowledge of both students and teachers.

The surprising results showed that students were more appreciative of the literally "old school" approach of lectures and were less enthusiastic than teachers about using ICTs in classes. Instructors were more fluent with the use of emails than with social media, while the opposite was true for students.

"Our analysis showed that teachers think that their students feel more positive about their classroom learning experience if there are more interactive, discussion-oriented activities. In reality, engaging and stimulating lectures, regardless of how technologies are used, are what really predict students' appreciation of a given university course," explains Fusaro.

The researchers hope these results will have a broad impact, especially in terms of curriculum design and professional development. For Venkatesh, "this project represents a true success story of collaboration across Québec universities that could definitely have an effect outside the province." Indeed, the large number of participants involved means this research is applicable to populations of learners across North America and Europe with similar educational and information technology infrastructures. An electronic revolution could soon sweep post-secondary classrooms around the world, thanks to this brand new research from Quebec.


Story Source:

The above story is reprinted from materials provided by Concordia University.


Journal Reference:

Kamran Shaikh, Vivek Venkatesh, Tieja Thomas, Kathryn Urbaniak, Timothy Gallant, David I., Amna Zuberi. Technological Transparency in the Age of Web 2.0: A Case Study of Interactions in Internet-Based Forums. InTech, 2012 DOI: 10.5772/29082


Perfecting email security

ScienceDaily (Sep. 10, 2012) — Millions of us send billions of emails back and forth each day without much concern for their security. On the whole, security is not a primary concern for most day-to-day emails, but some emails do contain personal, proprietary and sensitive information, documents, media, photos, videos and sound files. Unfortunately, the open nature of email means that messages can be intercepted and, if not encrypted, easily read by malicious third parties. Even with the PGP -- pretty good privacy -- encryption scheme first used in 1995, if a sender's private "key" is compromised, all their previous emails encrypted with that key can be exposed.

Writing in the International Journal of Security and Networks, computer scientists Duncan Wong and Xiaojian Tian of City University of Hong Kong explain how previous researchers had attempted to define perfect email privacy using PGP by developing a technique that would preclude the decryption of other emails should a private key be compromised. Unfortunately, say Wong and Tian, this definition fails if one allows the possibility that the email server itself may be compromised by hackers or other malicious users.

The team has now defined perfect forward secrecy for email as follows and suggested a technical solution to enable email security to be independent of the server used to send the message: "An e-mail system provides perfect forward secrecy if any third party, including the e-mail server, cannot recover previous session keys between the sender and the recipient even if the long-term secret keys of the sender and the recipient are compromised."

By building a new email protocol on this principle, the team suggests that it is now possible to exchange emails with almost zero risk of interference from third parties. "Our protocol provides both confidentiality and message authentication in addition to perfect forward secrecy," they explain.

The team's protocol involves Alice sending Bob an encrypted email with the hope that Charles will not be able to intercept and decrypt the message. Before the email is encrypted and sent the protocol suggested by Wong and Tian has Alice's computer send an identification code to the email server. The server creates a random session "hash" that is then used to encrypt the actual encryption key for the email Alice is about to send. Meanwhile, Bob as putative recipient receives the key used to create the hash and bounces back an identification tag. This allows Alice and Bob to verify each other's identity.

These preliminary steps all happen automatically, without Alice or Bob needing to do anything in advance. Now, Alice writes her email, encrypts it using PGP and then "hashes" it using the random key from the server. When Bob receives the encrypted message he uses his version of the hash to unlock the container within which the PGP-encrypted email sits. Bob then uses Alice's public PGP key to decrypt the message itself. No snooper on the Internet between Alice and Bob, not even the email server, ever has access to the PGP-encrypted email in the open. Moreover, because a different key is used to lock up the PGP-encrypted email with a second one-time layer, even if the PGP security is compromised, past emails created with the same key cannot be unlocked.
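
The layering can be pictured with the much-simplified sketch below, which is not the authors' protocol: symmetric Fernet keys stand in for both the PGP key pair and the server-assisted one-time session key. The point is only that discarding the one-time outer key after the exchange keeps past messages sealed even if the long-term key later leaks.

    from cryptography.fernet import Fernet

    long_term_key = Fernet.generate_key()   # stands in for Bob's long-term PGP key
    session_key = Fernet.generate_key()     # one-time key; negotiated via the server in the real protocol

    # Alice's side: inner (long-term) layer, then a one-time outer layer.
    inner = Fernet(long_term_key).encrypt(b"confidential report attached")
    outer = Fernet(session_key).encrypt(inner)

    # Bob's side: peel off the one-time layer, then the long-term layer.
    print(Fernet(long_term_key).decrypt(Fernet(session_key).decrypt(outer)))

    # Forward secrecy hinges on destroying session_key after use, so that a later
    # compromise of long_term_key cannot unwrap old traffic captured as 'outer'.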


Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.


Journal Reference:

Duncan S. Wong, Xiaojian Tian. E-mail protocols with perfect forward secrecy. International Journal of Security and Networks, 2012; 7 (1): 1 DOI: 10.1504/IJSN.2012.048491


Wednesday, October 24, 2012

Disaster is just a click away: Computer scientist, psychologist look at developing visual system to warn Internet users of safety risks

ScienceDaily (Sep. 11, 2012) — A Kansas State University computer scientist and psychologist are developing improved security warning messages that prompt users to go with their gut when it comes to making a decision online.

Eugene Vasserman, assistant professor of computing and information sciences, and Gary Brase, associate professor of psychology, are researching how to help computer users who have little to no computer experience improve their Web browsing safety without security-specific education. The goal is to keep users from making mistakes that could compromise their online security and to inform them when a security failure has happened.

"Security systems are very difficult to use, and staying safe online is a growing challenge for everyone," Vasserman said. "It is especially devastating to inexperienced computer users, who may not spot risk indicators and may misinterpret currently implemented textual explanations and visual feedback of risk."

Vasserman, whose expertise is in building secure networked systems, and Brase, who studies decision-making and the rationality behind people's choices, are developing a simple visual messaging system that would show novice computer users an easily understandable, relatable warning regarding their security decisions. These could concern a choice to visit a website with an expired security certificate, or a website that is known to contain malware, among other online dangers. The idea is to have users make a gut-reaction decision based on the message.

"The challenge is to get people to make the right decision," Vasserman said. "For example, sometimes a browser will show a dialog box saying this website has an expired SSL certificate, and sometimes the safer behavior is for people to still proceed and accept the expired certificate. But sometimes a website can pose a serious threat. We want people to make good choices without having to understand the technical detail, but we don't want to make the choice for them; we want to show them the importance and danger level of that choice."

Their project, "Education-optional Security Usability on the Internet," was recently awarded nearly $150,000 by the National Science Foundation. The researchers are using the funding to develop, test and evaluate the effectiveness of new and existing educational tools to find which ones cause users to make better online security choices.

This system should minimize the use of traditional text warnings and icons, according to Vasserman.

The messaging system created will also likely be used in a medical project that Vasserman and colleagues are developing. The researchers are designing a secure network for hospitals and doctors' offices so medical devices can communicate with each other to monitor and relay information about a patient's health. A system that shows instantaneously recognizable consequences could help physicians and hospital engineers, who are not familiar with cybersecurity, make a correct decision quickly about what to do with a medical device that has a security problem.

"Presenting bad things with some sort of visual image is tricky because you want to convey to the user that this is not good, but you also don't want to traumatize them," Vasserman said. "For example, some people are terrified of snakes so that may be too intense of an image to use. When this is applied to a medical environment you have to especially conscious, so there are more considerations."

Prior to collaborating with Brase, Vasserman and Sumeet Gujrati, a doctoral candidate in computing and information sciences, tested the effectiveness of textual and visual communication for security messages and workflows.

Researchers spent more than 90 hours collecting data by observing volunteers use a piece of popular software that encrypts files on a computer.

The on-screen instructions asked users to select a location to store the encrypted files, but users often selected an existing file due to the phrasing of the instructions. This prompted an on-screen warning message stating that the selected file would be erased and all of the information inside of it would be lost. Users then had to decide to continue and erase the file or cancel the process and start over.

"I sat in the room many times and watched as people read the warning message carefully, sometimes even re-reading it, and then watched as they clicked on 'yes' and destroyed the file," Vasserman said. "Because the information being conveyed to them in the message was not immediately clear, many users specifically deleted the file they wanted to protect. I see that as an indicator that a text warning is not effective at getting users to make the correct choice."


Story Source:

The above story is reprinted from materials provided by Kansas State University.


App protects Facebook users from hackers

ScienceDaily (Oct. 8, 2012) — Cyber-crime is expanding to the fertile grounds of social networks and University of California, Riverside engineers are fighting it.

A recent four-month experiment conducted by several UC Riverside engineering professors and graduate students found that the application they created to detect spam and malware posts on Facebook users' walls was highly accurate, fast and efficient.

The researchers also introduced the new term "socware" -- pronounced "sock-where" -- to describe a combination of "social malware," encompassing all criminal and parasitic behavior on online social networks.

Their free application, MyPageKeeper, successfully flagged 97 percent of socware during the experiment. In addition, it was incorrect -- flagging posts as socware when they did not fit into those categories -- only 0.005 percent of the time.

The researchers also found that it took an average of 0.0046 seconds to classify a post, far quicker than the 1.9 seconds it takes using the traditional approach of crawling websites. MyPageKeeper's more efficient classification also translates to lower costs, cutting expenses by up to 40 times.

"This is really the perfect recipe for socware detection to be viable at scale: high accuracy, fast, and cheap," said Harsha V. Madhyastha, an assistant professor of computer science and engineering at UC Riverside's Bourns College of Engineering.

Madhyastha conducted the research with Michalis Faloutsos, a professor of computer science and engineering, and Md Sazzadur Rahman and Ting-Kai Huang, both Ph.D. students. Rahman presented the paper outlining the findings at the recent USENIX Security Symposium 2012.

During the four-month experiment, which was conducted from June to October 2011, the researchers analyzed more than 40 million posts from 12,000 people who installed MyPageKeeper. They found that 49 percent of users were exposed to at least one socware post during the four months.

"This is really an arms race with hackers," said Faloutsos, who has studied web security for more than 15 years. "In many ways, Facebook has replaced e-mail and web sites. Hackers are following that same path and we need new applications like MyPageKeeper to stop them."

The application, which is already attracting commercial interest, works by continuously scanning the walls and news feeds of subscribed users, identifying socware posts and alerting the users. In the future, the researchers are considering allowing MyPageKeeper to remove malicious posts automatically.

The key novelty of the application is that it factors in the "social context" of the post. Social context includes the words in the post and the number of "likes" and comments it received.

For example, the researchers determined that the presence of words -- such as 'FREE,' 'Hurry,' 'Deal' and 'Shocked' -- provide a strong indication of the post being spam. They found that the use of six of the top 100 keywords is sufficient to detect socware.

The researchers point out that users are unlikely to 'like' or comment on socware posts because they add little value. Hence, fewer likes or comments are also an indicator of socware.

Furthermore, MyPageKeeper checks URLs against domain lists that have been identified as being responsible for spam, phishing or malware. Any URL that matches is classified as socware.
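
A bare-bones check in that spirit might look like the sketch below; the keyword list and blacklisted domains are drawn from examples in this article, the thresholds are invented, and the real MyPageKeeper classifier is considerably more elaborate.

    import re
    from urllib.parse import urlparse

    SPAM_KEYWORDS = {"free", "hurry", "deal", "shocked", "omg"}
    URL_BLACKLIST = {"iphonefree5.com", "nfljerseyfree.com"}

    def looks_like_socware(post_text, likes, comments, urls):
        words = set(re.findall(r"[a-z']+", post_text.lower()))
        keyword_hits = len(words & SPAM_KEYWORDS)
        hosts = {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}
        blacklisted = bool(hosts & URL_BLACKLIST)
        low_engagement = (likes + comments) == 0   # socware rarely earns likes or comments
        return blacklisted or (keyword_hits >= 2 and low_engagement)

    print(looks_like_socware("OMG free iPhone, hurry!", likes=0, comments=0,
                             urls=["http://iphonefree5.com/win"]))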

During the four-month experiment, the researchers also found:

- A consistently large number of socware notifications are sent every day, with noticeable spikes on a few days. For example, 4,056 notifications were sent on July 11, 2011, which corresponded to a scam that went viral conning users into completing surveys with the pretext of fake free products.
- Only 54 percent of socware links have been shortened by URL shorteners such as bit.ly and tinyurl.com. The researchers thought this number would be higher because URL shorteners allow the web site address to be hidden. They also found that many scams use somewhat obviously "fake" domain names, such as http://iphonefree5. com and http://nfljerseyfree. com, but users seem to fall for it and click the link.
- Certain words are much more likely to be found in Facebook socware than in e-mail spam. For example, "omg" is 332 times more likely to appear in Facebook socware. Meanwhile, "bank" is 56 times more likely to appear in e-mail spam.
- Twenty percent of socware links are hosted inside of Facebook.

This activity is so high that the researchers expect that Facebook will have to do more to protect its users against socware.

"Malware on Facebook seems to be hosted and enabled by Facebook itself," Faloutsos said. "It's a classic parasitic kind of behavior. It is fascinating and sad at the same time."

App: https://apps.facebook.com/mypagekeeper/


Story Source:

The above story is reprinted from materials provided by University of California - Riverside. The original article was written by Sean Nealon.


Tuesday, October 23, 2012

Tactile glove provides subtle guidance to objects in vicinity

ScienceDaily (Oct. 10, 2012) — Researchers at HIIT and Max Planck Institute for Informatics have shown how computer-vision based hand-tracking and vibration feedback on the user's hand can be used to steer the user's hand toward an object of interest. A study shows an almost three-fold advantage in finding objects from complex visual scenes, such as library or supermarket shelves.

Finding an object from a complex real-world scene is a common yet time-consuming and frustrating chore. What makes this task complex is that humans' pattern recognition capability reduces to a serial one-by-one search when the items resemble each other.

Researchers from the Helsinki Institute for Information Technology HIIT and the Max Planck Institute for Informatics have developed a prototype of a glove that uses vibration feedback on the hand to guide the user's hand towards a predetermined target in 3D space. The glove could help users in daily visual search tasks in supermarkets, parking lots, warehouses, libraries etc.

The main researcher, Ville Lehtinen of HIIT, explains: "The advantage of steering a hand with tactile cues is that the user can easily interpret them in relation to the current field of view where the visual search is operating. This provides a very intuitive experience, like the hand being 'pulled' toward the target."

The solution builds on inexpensive off-the-shelf components such as four vibrotactile actuators on a simple glove and a Microsoft Kinect sensor for tracking the user's hand. The researchers published a dynamic guidance algorithm that calculates effective actuation patterns based on distance and direction to the target.
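
A toy version of such a guidance rule is sketched below: the error vector from hand to target is turned into intensities for four actuators (left/right/up/down), scaled down as the hand closes in on the target. The axis convention and scaling are invented for the example, and the published algorithm is more refined.

    import numpy as np

    def actuation_pattern(hand_xyz, target_xyz, max_distance=1.0):
        error = np.asarray(target_xyz, float) - np.asarray(hand_xyz, float)
        distance = np.linalg.norm(error)
        if distance == 0:
            return {"left": 0.0, "right": 0.0, "up": 0.0, "down": 0.0}
        direction = error / distance
        strength = min(distance / max_distance, 1.0)   # weaker cues close to the target
        dx, dy = direction[0], direction[1]            # guide within the camera's image plane
        return {
            "right": strength * max(dx, 0.0),
            "left": strength * max(-dx, 0.0),
            "up": strength * max(dy, 0.0),
            "down": strength * max(-dy, 0.0),
        }

    print(actuation_pattern(hand_xyz=(0.2, 0.1, 0.5), target_xyz=(0.5, -0.1, 0.5)))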

In a controlled experiment, the complexity of the visual search task was increased by adding distractors to a scene. "In search tasks where there were hundreds of candidates but only one correct target, users wearing the glove were consistently faster, with up to three times faster performance than without the glove," says Dr. Antti Oulasvirta from Max Planck Institute for Informatics.

Dr. Petteri Nurmi from HIIT adds: "This level of improvement in search performance justifies several practical applications. For instance, warehouse workers could have gloves that guide them to target shelves, or a pedestrian could navigate using this glove. With the relatively inexpensive components and the dynamic guidance algorithm, others can easily build their own personal guidance systems."

The research paper will be presented at the 25th ACM Symposium on User Interface Software and Technology (UIST'12) in Boston, MA, USA, on 7-10 October 2012.



Story Source:

The above story is reprinted from materials provided by Helsingin yliopisto (University of Helsinki), via AlphaGalileo.


Monday, October 22, 2012

Negative effects of computerized surveillance at home: Cause of annoyance, concern, anxiety, and even anger

ScienceDaily (Oct. 8, 2012) — To understand the effects of continuous computerized surveillance on individuals, a Finnish research group instrumented ten Finnish households with video cameras, microphones, and logging software for personal computers, wireless networks, smartphones, TVs, and DVDs. The twelve participants filled in monthly questionnaires to report on stress levels and were interviewed at six and twelve months. The study was carried out by the Helsinki Institute for Information Technology HIIT, a joint research institute of Aalto University and the University of Helsinki, Finland.

The results expose a range of negative changes in experience and behavior. To all except one participant, the surveillance system proved to be a cause of annoyance, concern, anxiety, and even anger. However, surveillance did not cause mental health issues comparable in severity to depression or alcoholism, when measured with a standardized scale. Nevertheless, one household dropped out of the study at six months, citing that the breach of privacy and anonymity had grown unbearable.

The surveillees' privacy concerns plateaued after about three months, as the surveillees got more used to surveillance. The researchers attribute this to behavioral regulation of privacy. Almost all subjects exhibited changes in behavior to control what the system perceives. Some hid their activities in the home from the sensors, while some transferred them to places outside the home. Dr. Antti Oulasvirta explains: "Although almost all were capable of adapting their daily practices to maintain privacy intrusion at a level they could tolerate, the required changes made the home fragile. Any unpredicted social event would bring the new practices to the fore and question them, and at times prevent them from taking place."

The researchers were surprised that computer logging was as disturbing as camera-based surveillance. On the one hand, logging the computer was experienced negatively because it breaches the anonymity of conversations. "The importance of anonymity in computer use is symptomatic of the fact that a large proportion of our social activities today are mediated by computers," Oulasvirta says.

On the other hand, the ever-observing "eye," the video camera, deprived the participants of the solitude and isolation they expect at home. The surveillees felt particularly strongly the violation of reserve and intimacy through the capture of nudity, physical appearance, and sex. "Psychological theories of privacy have postulated six privacy functions of the home, and we find that computerized surveillance can disturb all of them," Oulasvirta concludes.

More experimental research is needed to reveal the effects of computerized surveillance. Prof. Petri Myllymäki explains: "Because the topic is challenging to study empirically, there is hardly any published research on the effects of intrusive surveillance on everyday life. In the Helsinki Privacy Experiment project, we did rigorous ethical and legal preparations, and invested in a robust technical platform, in order to allow a longitudinal field experiment of privacy. The present sample of subjects is potentially biased, as it was selected from people who volunteered based on an Internet advertisement. While we realize the limits of our sample, our work can facilitate further inquiries into this important subject."

The first results were presented at the 14th International Conference on Ubiquitous Computing (Ubicomp 2012) in Pittsburgh, PA, USA.


Story Source:

The above story is reprinted from materials provided by Aalto University.


Sunday, October 21, 2012

Artificially intelligent game bots pass the Turing test on Turing's centenary

ScienceDaily (Sep. 26, 2012) — An artificially intelligent virtual gamer created by computer scientists at The University of Texas at Austin has won the BotPrize by convincing a panel of judges that it was more human-like than half the humans it competed against.

The competition was sponsored by 2K Games and was set inside the virtual world of "Unreal Tournament 2004," a first-person shooter video game. The winners were announced this month at the IEEE Conference on Computational Intelligence and Games.

"The idea is to evaluate how we can make game bots, which are nonplayer characters (NPCs) controlled by AI algorithms, appear as human as possible," said Risto Miikkulainen, professor of computer science in the College of Natural Sciences. Miikkulainen created the bot, called the UT^2 game bot, with doctoral students Jacob Schrum and Igor Karpov.

The bots face off in a tournament against one another and about an equal number of humans, with each player trying to score points by eliminating its opponents. Each player also has a "judging gun" in addition to its usual complement of weapons. That gun is used to tag opponents as human or bot.

The bot that is scored as most human-like by the human judges is named the winner. UT^2, which won a warm-up competition last month, shared the honors with MirrorBot, which was programmed by Romanian computer scientist Mihai Polceanu.

The winning bots both achieved a humanness rating of 52 percent. Human players received an average humanness rating of only 40 percent. The two winning teams will split the $7,000 first prize.

The victory comes 100 years after the birth of mathematician and computer scientist Alan Turing, whose "Turing test" stands as one of the foundational definitions of what constitutes true machine intelligence. Turing argued that we will never be able to see inside a machine's hypothetical consciousness, so the best measure of machine sentience is whether it can fool us into believing it is human.

"When this 'Turing test for game bots' competition was started, the goal was 50 percent humanness," said Miikkulainen. "It took us five years to get there, but that level was finally reached last week, and it's not a fluke."

The complex gameplay and 3-D environments of "Unreal Tournament 2004" require that bots mimic humans in a number of ways, including moving around in 3-D space, engaging in chaotic combat against multiple opponents and reasoning about the best strategy at any given point in the game. Even displays of distinctively human irrational behavior can, in some cases, be emulated.

"People tend to tenaciously pursue specific opponents without regard for optimality," said Schrum. "When humans have a grudge, they'll chase after an enemy even when it's not in their interests. We can mimic that behavior."

In order to most convincingly mimic as much of the range of human behavior as possible, the team takes a two-pronged approach. Some behavior is modeled directly on previously observed human behavior, while the central battle behaviors are developed through a process called neuroevolution, which runs artificially intelligent neural networks through a survival-of-the-fittest gauntlet that is modeled on the biological process of evolution.

Networks that thrive in a given environment are kept, and the less fit are thrown away. The holes in the population are filled by copies of the fit ones and by their "offspring," which are created by randomly modifying (mutating) the survivors. The simulation is run for as many generations as are necessary for networks to emerge that have evolved the desired behavior.
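
The loop itself is simple to sketch, as below; here the "network" is reduced to a bare weight vector and the fitness function is only a placeholder for scoring a bot's in-game behaviour under the human-likeness constraints, so this shows the shape of the procedure rather than the UT^2 code.

    import random

    def fitness(weights):                    # placeholder for an in-game evaluation
        return -sum((w - 0.5) ** 2 for w in weights)

    def mutate(weights, rate=0.1):
        return [w + random.gauss(0, rate) for w in weights]

    population = [[random.random() for _ in range(8)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:5]           # networks that thrive are kept
        offspring = [mutate(random.choice(survivors)) for _ in range(15)]
        population = survivors + offspring   # the holes are filled by mutated copies
    print("best fitness:", fitness(max(population, key=fitness)))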

"In the case of the BotPrize," said Schrum, "a great deal of the challenge is in defining what 'human-like' is, and then setting constraints upon the neural networks so that they evolve toward that behavior.

"If we just set the goal as eliminating one's enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot's aim, such that rapid movements and long distances decrease accuracy. By evolving for good performance under such behavioral constraints, the bot's skill is optimized within human limitations, resulting in behavior that is good but still human-like."

Miikkulainen said that methods developed for the BotPrize competition should eventually be useful not just in developing games that are more entertaining, but also in creating virtual training environments that are more realistic, and even in building robots that interact with humans in more pleasant and effective ways.


Story Source:

The above story is reprinted from materials provided by University of Texas at Austin, via EurekAlert!, a service of AAAS.


Saturday, October 20, 2012

Popularity versus similarity: A balance that predicts network growth

ScienceDaily (Sep. 13, 2012) — Do you know who Michael Jackson or George Washington was? You most likely do: they are what we call "household names" because these individuals were so ubiquitous. But what about Giuseppe Tartini or John Bachar?

That's much less likely, unless you are a fan of Italian baroque music or free solo climbing.

In that case, you would be just as likely to have heard of Bachar as of Washington. The latter was popular, while the former was not as popular but had interests similar to yours.

A new paper published this week in the science journal Nature by the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, explores the concept of popularity versus similarity, and asks whether one more than the other fuels the growth of a variety of networks, whether it is the Internet, a social network of trust between people, or a biological network.

The researchers, in a study called "Popularity Versus Similarity in Growing Networks", show for the first time how networks evolve optimizing a unique trade-off between popularity and similarity. They found that while popularity attracts new connections, similarity is just as attractive.

"Popular nodes in a network, or those that are more connected than others, tend to attract more new connections in growing networks," said Dmitri Krioukov, co-author of the Nature paper and a research scientist with SDSC's CAIDA group, which studies the practical and theoretical aspects of the Internet and other large networks. "But similarity between nodes is just as important because it is instrumental in determining precisely how these networks grow. Accounting for these similarities can help us better predict the creation of new links in evolving networks."

In the paper, Krioukov and his colleagues, who include network analysis experts from academic institutions in Cyprus and Spain, describe a new model that significantly increases the accuracy of network evolution prediction by considering the trade-offs between popularity and similarity. Their model describes the large-scale evolution of three kinds of networks: technological (the Internet), social (a network of trust relationships between people), and biological (a metabolic network of Escherichia coli, typically found harmlessly in the human gastrointestinal tract, though some strains can cause diarrheal diseases).

The researchers write that the model's ability to predict links in networks may find applications ranging from predicting protein interactions or terrorist connections to improving recommender and collaborative filtering systems, such as Netflix or Amazon product recommendations.

"On a more general note, if we know the laws describing the dynamics of a complex system, then we not only can predict its behavior, but we may also find ways to better control it," added Krioukov.

In establishing connections in networks, nodes optimize certain trade-offs between the two dimensions of popularity and similarity, according to the researchers. "These two dimensions can be combined or mapped into a single space, and this mapping allows us to predict the probability of connections in networks with a remarkable accuracy," said Krioukov. "Not only can we capture all the structural properties of three very different networks, but also their large-scale growth dynamics. In short, these networks evolve almost exactly as our model predicts."
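
In stripped-down form, the growth rule can be rendered as in the sketch below: each new node receives a random angular "similarity" coordinate and links to the existing nodes that minimise the product of the older node's birth rank (its popularity) and the angular distance between them (their dissimilarity). This omits refinements of the published model, such as popularity fading over time.

    import math
    import random

    def grow_network(n_nodes, m=2):
        angles, edges = [], []
        for t in range(1, n_nodes + 1):
            theta_t = random.uniform(0, 2 * math.pi)       # similarity coordinate of the newcomer

            def tradeoff(s):                               # popularity x dissimilarity
                d = abs(theta_t - angles[s - 1])
                return s * min(d, 2 * math.pi - d)

            for s in sorted(range(1, t), key=tradeoff)[:m]:
                edges.append((t, s))                       # newcomer t links to old node s
            angles.append(theta_t)
        return edges

    print(grow_network(10))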

Many factors contribute to the probability of connections between nodes in real networks. In the Internet, for example, this probability depends on geographic, economic, political, technological, and many other factors, many of which are un-measurable or even unknown.

"The beauty of the new model is that it accounts for all of these factors, and projects them, properly weighted, into a single metric, while allowing us to predict the probability of new links with a high degree of precision," according to Krioukov.

The other researchers who worked on this project are Fragkiskos Papadopoulos, Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology in Cyprus; Maksim Kitsak, CAIDA/SDSC/UC San Diego; M. Ángeles Serrano and Marián Boguñá, Departament de Física Fonamental, Universitat de Barcelona, in Spain.

This research was supported by a variety of grants, including National Science Foundation (NSF) grants CNS-0964236, CNS-1039646, and CNS-0722070; Department of Homeland Security (DHS) grant N66001-08-C-2029; Defense Advanced Research Projects Agency (DARPA) grant HR0011-12-1-0012; and support from Cisco Systems.

International support was provided by a Marie Curie International Reintegration Grant within the 7th European Community Framework Programme; Office of the Ministry of Economy and Competitiveness, Spain (MICINN) projects FIS2010-21781-C02-02 and BFU2010-21847-C02-02; Generalitat de Catalunya grant 2009SGR838; the Ramón y Cajal program of the Spanish Ministry of Science; and the Catalan Institution for Research and Advanced Studies (ICREA) Academia prize 2010, funded by the Generalitat de Catalunya, Spain.

Story Source:

The above story is reprinted from materials provided by University of California, San Diego. The original article was written by Jan Zverina.

Journal Reference:

Fragkiskos Papadopoulos, Maksim Kitsak, M. Ángeles Serrano, Marián Boguñá, Dmitri Krioukov. Popularity versus similarity in growing networks. Nature, 2012; DOI: 10.1038/nature11459

Thursday, October 18, 2012

Training computers to understand the human brain

ScienceDaily (Oct. 5, 2012) — Tokyo Institute of Technology researchers use fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently 'label' each pictured object with certain properties, whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool).

After 'training' the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it using orthographic, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
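
The train-on-one-modality, test-on-the-other procedure can be sketched with an off-the-shelf classifier. The snippet below is not the authors' analysis pipeline; it only illustrates within-modality and cross-modal decoding with a linear support vector machine, and the random arrays stand in for real preprocessed voxel patterns.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Placeholder data: rows are fMRI scans (one per trial), columns are voxels,
# labels are 0 = animal, 1 = tool.  A real study would load preprocessed
# activation patterns here instead of random numbers.
rng = np.random.default_rng(0)
X_auditory, y_auditory = rng.normal(size=(120, 500)), rng.integers(0, 2, 120)
X_ortho, y_ortho = rng.normal(size=(120, 500)), rng.integers(0, 2, 120)

clf = make_pipeline(StandardScaler(), LinearSVC())

# Within-modality decoding: train and test on auditory-session scans.
clf.fit(X_auditory[:80], y_auditory[:80])
print("within-modality accuracy:", clf.score(X_auditory[80:], y_auditory[80:]))

# Cross-modal decoding: train on auditory scans, test on orthographic scans.
clf.fit(X_auditory, y_auditory)
print("cross-modal accuracy:", clf.score(X_ortho, y_ortho))
```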

Story Source:

The above story is reprinted from materials provided by Tokyo Institute of Technology, via ResearchSEA.

Journal Reference:

Hiroyuki Akama, Brian Murphy, Li Na, Yumiko Shimizu, Massimo Poesio. Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study. Frontiers in Neuroinformatics, 2012; 6 DOI: 10.3389/fninf.2012.00024


Computer, read my lips: Emotion detector developed using a genetic algorithm

ScienceDaily (Sep. 10, 2012) — A computer is being taught to interpret human emotions based on lip pattern, according to research published in the International Journal of Artificial Intelligence and Soft Computing. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communications devices, such as voice synthesizers, more effectively and more efficiently.

Karthigayan Muthukaruppan of Manipal International University in Selangor, Malaysia, and co-workers have developed a system that uses a genetic algorithm, one that improves with each iteration, to match irregular ellipse-fitting equations to the shape of a human mouth displaying different emotions. They used photos of individuals from South-East Asia and Japan to train a computer to recognize the six commonly accepted human emotions -- happiness, sadness, fear, anger, disgust and surprise -- and a neutral expression. The algorithm analyzes the upper and lower lips as two separate ellipses.
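
As a rough illustration of the underlying technique, rather than the authors' implementation, the sketch below uses a simple genetic algorithm to fit an axis-aligned ellipse to a set of lip-contour points; the recovered centre and axis lengths are the kind of shape features an emotion classifier could then use. The population size, mutation settings and helper names are all invented for the example.

```python
import math
import random

def ellipse_error(params, points):
    """Mean deviation of points from the ellipse ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1."""
    cx, cy, a, b = params
    return sum(abs(((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 - 1)
               for x, y in points) / len(points)

def fit_ellipse_ga(points, pop_size=60, generations=200, seed=1):
    """Toy genetic algorithm: keep the fittest half, breed children by
    averaging two parents and adding Gaussian mutation."""
    rng = random.Random(seed)
    def random_individual():
        return [rng.uniform(-1, 1), rng.uniform(-1, 1),
                rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)]
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: ellipse_error(ind, points))
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            child = [(u + v) / 2 + rng.gauss(0, 0.05) for u, v in zip(p1, p2)]
            child[2] = max(child[2], 1e-3)   # keep the axes positive
            child[3] = max(child[3], 1e-3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: ellipse_error(ind, points))

if __name__ == "__main__":
    # Synthetic "upper lip" contour: half of an ellipse with a = 0.8, b = 0.3.
    contour = [(0.8 * math.cos(i * math.pi / 20), 0.3 * math.sin(i * math.pi / 20))
               for i in range(21)]
    print(fit_ellipse_ga(contour))   # should land near (0, 0, 0.8, 0.3)
```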

"In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers especially in the area of human emotion recognition by observing facial expression," the team explains. Earlier researchers have developed an understanding that allows emotion to be recreated by manipulating a representation of the human face on a computer screen. Such research is currently informing the development of more realistic animated actors and even the behavior of robots. However, the inverse process in which a computer recognizes the emotion behind a real human face is still a difficult problem to tackle.

It is well known that many deeper emotions are betrayed by more than movements of the mouth: a genuine smile, for instance, involves flexing of muscles around the eyes, and eyebrow movements are almost universally essential to the subconscious interpretation of a person's feelings. The lips nevertheless remain a crucial part of the outward expression of emotion, and the team's algorithm can successfully classify the six emotions and the neutral expression described above.

The researchers suggest that initial applications of such an emotion detector might be helping disabled patients lacking speech to interact more effectively with computer-based communication devices, for instance.

Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.

Journal Reference:

M. Karthigayan, R. Nagarajan, M. Rizon, Sazali Yaacob. Lip pattern in the interpretation of human emotions. International Journal of Artificial Intelligence and Soft Computing, 2012; 3 (2): 95 DOI: 10.1504/IJAISC.2012.049004


An operating system in the cloud: TransOS could displace conventional desktop operating systems

ScienceDaily (Oct. 9, 2012) — A new cloud-based operating system for all kinds of computers is being developed by researchers in China. Details of the TransOS system are reported in a forthcoming special issue of the International Journal of Cloud Computing.

Computer users are familiar, to varying degrees, with the operating system that gets their machines up and running, whether that is Microsoft Windows, Apple's Mac OS, Linux, Chrome OS or another system. The OS handles the links between the hardware: the CPU, memory, hard drive, peripherals such as printers and cameras, and the components that connect the computer to the Internet. Critically, it also lets the user run the software and applications they need, such as email programs, web browsers, word processors, spreadsheets and games.

Operating systems still seem firmly entrenched on the personal computer, where a user's files, documents, movies, sounds and images sit deep within the hard drive. Traditionally, software too is stored on the same hard drive so that the programs a user needs are quickly accessible at any given time. However, there is a growing movement to take applications off the personal hard drive and put them "in the cloud."

The user connects to the Internet and "runs" the software as and when needed from a cloud server, perhaps even storing their files in the cloud too. This has numerous advantages for the user. First, the software can be kept up to date automatically without their intervention. Secondly, the software is independent of the hardware and operating system and so can be run from almost any computer with an Internet connection. Thirdly, if the user files are also in the cloud, then they can access and use their files anywhere in the world with a network connection and at any time.

The obvious next step is to make the entire process transparent by stripping the operating system from the computer and putting it in the cloud as well. The computer then becomes a sophisticated but essentially dumb terminal, and its configuration and capabilities become irrelevant to how the user interacts with their files. Already most types of software are represented in the cloud by alternative or additional versions of their desktop equivalents, and systems such as Java were developed to allow applications to run in a web browser irrespective of the computer or operating system on which the browser runs, but we have yet to see a fully functional cloud-based OS.

Now, Yaoxue Zhang and Yuezhi Zhou of Tsinghua University, in Beijing, China, have at last developed an operating system for the cloud -- TransOS. The operating system code is stored on a cloud server and allows a connection from a bare terminal computer. The terminal has a minimal amount of code that boots it up and connects it to the Internet dynamically. TransOS then downloads specific pieces of code that offer the user options as if they were running a conventional operating system via a graphical user interface. Applications are then run, calling on the TransOS code only as needed so that memory is not hogged by inactive operating system code as it is by a conventional desktop computer operating system.
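
The "only as needed" behaviour described above is, at heart, lazy loading backed by a small local cache. The sketch below illustrates only that general pattern; the endpoint, module names and cache policy are invented and are not part of TransOS.

```python
import urllib.request

class LazyModuleLoader:
    """Illustrative lazy loader: fetch a code module from a (hypothetical)
    cloud endpoint the first time it is requested, keep it in a small local
    cache, and evict the least recently used entry when the cache is full."""

    def __init__(self, base_url, cache_size=4):
        self.base_url = base_url      # e.g. "https://example.invalid/cloud-os/"
        self.cache_size = cache_size
        self.cache = {}               # module name -> source text
        self.order = []               # most recently used last

    def get(self, name):
        if name in self.cache:
            self.order.remove(name)
            self.order.append(name)
            return self.cache[name]
        source = urllib.request.urlopen(self.base_url + name + ".py").read()
        if len(self.cache) >= self.cache_size:
            evicted = self.order.pop(0)   # drop the least recently used module
            del self.cache[evicted]
        self.cache[name] = source
        self.order.append(name)
        return source

# Hypothetical usage:
# loader = LazyModuleLoader("https://example.invalid/cloud-os/")
# editor_code = loader.get("text_editor")
```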

"TransOS manages all the resources to provide integrated services for users, including traditional operating systems," the team says. "The TransOS manages all the networked and virtualized hardware and software resources, including traditional OS, physical and virtualized underlying hardware resources, and enables users can select and run any service on demand," the team says

The researchers suggest that TransOS need not be limited to personal computers: it could also be enabled on domestic appliances (refrigerators and washing machines, for instance) and factory equipment. The concept should also work well with mobile devices, such as phones and tablet PCs. It is essential, the team adds, that a cloud operating system architecture and relevant interface standards now be established to allow TransOS to be developed for a vast range of applications.

Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.

Journal Reference:

Yaoxue Zhang, Yuezhi Zhou. TransOS: a transparent computing-based operating system for the cloud. Int. J. Cloud Computing, 2012, 1, 287-301


Tuesday, October 16, 2012

A network to guide the future of computing

ScienceDaily (Sep. 13, 2012) — Moore's Law, the observation by Intel co-founder Gordon E. Moore that the number of transistors on a chip doubles approximately every two years, has been accurate for half a century. As a result, we now carry more processing power in the mobile phones in our pockets than could fit in a house-sized computer in the 1960s. But by around 2020 Moore's Law will start to reach its limits: the laws of physics will eventually pose a barrier to higher transistor density, but other factors such as heat, energy consumption and cost look set to slow the increase in performance even sooner.
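
Moore's observation reduces to a simple doubling formula, count = starting count x 2^(years elapsed / 2), which the short snippet below evaluates; the starting figure is illustrative rather than a historical data point.

```python
def transistors(count_at_start, years_elapsed, doubling_period_years=2.0):
    """Project a transistor count under Moore's Law-style doubling."""
    return count_at_start * 2 ** (years_elapsed / doubling_period_years)

# Illustrative projection: a chip with 1 million transistors in 1990 would be
# expected to reach roughly a billion by 2010 (10 doublings, a factor of 1,024).
print(f"{transistors(1_000_000, 2010 - 1990):,.0f}")
```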

At the same time, the world is in the midst of a data explosion, with humans and machines generating, storing, sharing and accessing ever increasing amounts of data in many different forms, on a multitude of different devices that require more energy-efficient, higher-performance processors.

How can computing systems, now facing a post-Moore era, meet this ever growing demand?

It is an open-ended question, but one that European researchers are working hard to answer, thanks in large measure to the efforts of HiPEAC, a 'Network of excellence' from academia and industry that has been helping to steer European computing systems research since 2004. Currently in its third incarnation, supported over four years by EUR 3.8 million in funding from the European Commission, the project has become the most visible and far-reaching computing systems network in Europe.

'HiPEAC was set up with three main goals: to bring together academia and industry, to bring together hardware and software developers and to create a real, visible computer systems community in Europe. On those fronts, and many others, we have undoubtedly succeeded,' says Koen De Bosschere, professor of the computer systems lab of Ghent University in Belgium and coordinator of the HiPEAC network.

HiPEAC's conferences and networking events are now attended by hundreds of academic researchers and industry representatives from Europe and beyond; the network's summer schools, workshops and exchange grants between universities are helping train researchers in new and emerging areas of computing systems theory and technology; and the project's biannual roadmap has become a guideline for both the public and private sector as to where research funding should be channelled.

'We now have a portfolio of between 30 and 40 computer systems projects that we are working with. The researchers involved come to our events, which have become one of the sector's main networking opportunities, and several projects have actually emerged from people meeting at our conferences,' Prof. De Bosschere notes.

He points, for example, to the EuroCloud project, which began in 2010 with the support of EUR 3.3 million in funding from the European Commission. Coordinated by microprocessor designer ARM in the United Kingdom, the project is developing on-chip servers using multiple ARM cores and integrating 3D DRAM with the aim of reducing energy consumption and costs at data centres by as much as 90 %.

The idea for the project first arose at the HiPEAC conference in Cyprus in 2009, Prof. De Bosschere notes. 'These kinds of networking opportunities are really showing their worth in spurring collaboration and innovation.'

A roadmap of challenges and opportunities for Europe

Meanwhile, the HiPEAC Roadmap, a new edition of which is due to be published this year, has become something of a guidebook for the future of computing systems research in Europe.

'We didn't really set out doing it with that aim in mind, but the Commission took notice of it, consulted with industry on it, found the challenges we had identified to be accurate and started to use it to focus research funding,' the HiPEAC coordinator explains. 'Since we produced the first edition in 2008, EU funding in the sector has almost tripled and the next call will offer around EUR 70 million.'

For the short and medium term, the latest edition of the HiPEAC report concludes that specialising computing devices is the most promising but difficult path for dramatically improving the performance of future computing systems. In this light, HiPEAC has identified seven concrete research objectives -- from energy efficiency to system complexity and reliability -- related to the design and the exploitation of specialised heterogeneous systems. But in the longer term, the HiPEAC researchers say it will be critical to pursue research directions that break with classical systems, and their traditional hardware/software boundary, by investigating new devices and new computing paradigms, such as bio-inspired systems, stochastic computing and swarm computing.

'We can only go so far by following current trends and approaches, but in the long run we will nonetheless want and require more processing power that is more reliable, consumes less energy, produces less heat and can fit into smaller devices. More processing power means more applications and entirely new markets -- just look at what's happened with smartphones and tablet computers over the last five years,' Prof. De Bosschere says. 'For industry, it means that today, instead of a person having just one desktop or laptop computer, they may have three or four devices.'

And, in the future, he sees ever higher-performance devices doing much more than is possible or even imaginable today: bio-inspired neural networks powering data mining applications at 1 % of the energy consumption of today's data centres, for example, or smartphones that can analyse a blood sample, sequence the DNA and detect a virus in a few minutes, rather than the days it takes using laboratory computer systems at present.

'The potential applications for computing technology in almost every aspect of life are almost endless -- we just need to make sure we have the processing power to run them,' he says.

HiPEAC received research funding under the European Union's Seventh Framework Programme.

Story Source:

The above story is reprinted from materials provided by CORDIS Features, formerly ICT Results.


Monday, October 15, 2012

Robots using tools: Researchers aim to create 'MacGyver' robot

ScienceDaily (Oct. 9, 2012) — Robots are increasingly being used in place of humans to explore hazardous and difficult-to-access environments, but they aren't yet able to interact with their environments as well as humans. If today's most sophisticated robot were trapped in a burning room by a jammed door, it would probably not know how to locate and use objects in the room to climb over any debris, pry open the door, and escape the building.

A research team led by Professor Mike Stilman at the Georgia Institute of Technology hopes to change that by giving robots the ability to use objects in their environments to accomplish high-level tasks. The team recently received a three-year, $900,000 grant from the Office of Naval Research to work on this project.

"Our goal is to develop a robot that behaves like MacGyver, the television character from the 1980s who solved complex problems and escaped dangerous situations by using everyday objects and materials he found at hand," said Stilman, an assistant professor in the School of Interactive Computing at Georgia Tech. "We want to understand the basic cognitive processes that allow humans to take advantage of arbitrary objects in their environments as tools. We will achieve this by designing algorithms for robots that make tasks that are impossible for a robot alone possible for a robot with tools."

The research will build on Stilman's previous work on navigation among movable obstacles that enabled robots to autonomously recognize and move obstacles that were in the way of their getting from point A to point B.

"This project is challenging because there is a critical difference between moving objects out of the way and using objects to make a way," explained Stilman. "Researchers in the robot motion planning field have traditionally used computerized vision systems to locate objects in a cluttered environment to plan collision-free paths, but these systems have not provided any information about the objects' functions."

To create a robot capable of using objects in its environment to accomplish a task, Stilman plans to develop an algorithm that will allow a robot to identify an arbitrary object in a room, determine the object's potential function, and turn that object into a simple machine that can be used to complete an action. Actions could include using a chair to reach something high, bracing a ladder against a bookshelf, stacking boxes to climb over something, and building levers or bridges from random debris.

By providing the robot with basic knowledge of rigid body mechanics and simple machines, the robot should be able to autonomously determine the mechanical force properties of an object and construct motion plans for using the object to perform high-level tasks.
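
The kind of simple-machine reasoning described above can be made concrete with a back-of-the-envelope lever calculation: given how far from the pivot the robot pushes and how far from the pivot the load sits, does the amplified force exceed the resistance? The sketch below is an idealized illustration with invented numbers, not Stilman's algorithm.

```python
def lever_output_force(applied_force_n, effort_arm_m, load_arm_m):
    """Ideal first-class lever: output force = applied force x mechanical advantage."""
    mechanical_advantage = effort_arm_m / load_arm_m
    return applied_force_n * mechanical_advantage

def can_pry_open(door_resistance_n, robot_force_n, effort_arm_m, load_arm_m):
    """Return True if the lever lets the robot exceed the door's resistance."""
    return lever_output_force(robot_force_n, effort_arm_m, load_arm_m) >= door_resistance_n

# Illustrative check: a rod pushed 1.0 m from the pivot, with the load 0.2 m
# from the pivot, turns a 150 N push into roughly 750 N, enough for a 600 N jam.
print(can_pry_open(door_resistance_n=600, robot_force_n=150,
                   effort_arm_m=1.0, load_arm_m=0.2))
```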

For example, exiting a burning room with a jammed door would require a robot to travel around any fire, use an object in the room to apply sufficient force to open the stuck door, and locate an object in the room that will support its weight while it moves to get out of the room.

Such skills could be extremely valuable in the future as robots work side-by-side with military personnel to accomplish challenging missions.

"The Navy prides itself on recruiting, training and deploying our country's most resourceful and intelligent men and women," said Paul Bello, director of the cognitive science program in the Office of Naval Research (ONR). "Now that robotic systems are becoming more pervasive as teammates for warfighters in military operations, we must ensure that they are both intelligent and resourceful. Professor Stilman's work on the 'MacGyver-bot' is the first of its kind, and is already beginning to deliver on the promise of mechanical teammates able to creatively perform in high-stakes situations."

To address the complexity of the human-like reasoning required for this type of scenario, Stilman is collaborating with researchers Pat Langley and Dongkyu Choi. Langley is the director of the Institute for the Study of Learning and Expertise (ISLE), and is recognized as a co-founder of the field of machine learning, where he championed both experimental studies of learning algorithms and their application to real-world problems. Choi is an assistant professor in the Department of Aerospace Engineering at the University of Kansas.

Langley and Choi will expand the cognitive architecture they developed, called ICARUS, which provides an infrastructure for modeling various human capabilities like perception, inference, performance and learning in robots.

"We believe a hybrid reasoning system that embeds our physics-based algorithms within a cognitive architecture will create a more general, efficient and structured control system for our robot that will accrue more benefits than if we used one approach alone," said Stilman.

After the researchers develop and optimize the hybrid reasoning system using computer simulations, they plan to test the software using Golem Krang, a humanoid robot designed and built in Stilman's laboratory to study whole-body robotic planning and control.

This research is sponsored by the Department of the Navy, Office of Naval Research, through grant number N00014-12-1-0143. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research.

Story Source:

The above story is reprinted from materials provided by Georgia Institute of Technology.


Sunday, October 14, 2012

Android-based network built to study cyber disruptions and help secure hand-held devices

ScienceDaily (Oct. 2, 2012) — As part of ongoing research to help prevent and mitigate disruptions to computer networks on the Internet, researchers at Sandia National Laboratories in California have turned their attention to smartphones and other hand-held computing devices.

Sandia cyber researchers linked together 300,000 virtual hand-held computing devices running the Android operating system so they can study large networks of smartphones and find ways to make them more reliable and secure. Android dominates the smartphone industry and runs on a range of computing gadgets.

The work is expected to result in a software tool that will allow others in the cyber research community to model similar environments and study the behaviors of smartphone networks. Ultimately, the tool will enable the computing industry to better protect hand-held devices from malicious intent.

The project builds on the success of earlier work in which Sandia focused on virtual Linux and Windows desktop systems.

"Smartphones are now ubiquitous and used as general-purpose computing devices as much as desktop or laptop computers," said Sandia's David Fritz. "But even though they are easy targets, no one appears to be studying them at the scale we're attempting."

The Android project, dubbed MegaDroid, is expected to help researchers at Sandia and elsewhere who struggle to understand large scale networks. Soon, Sandia expects to complete a sophisticated demonstration of the MegaDroid project that could be presented to potential industry or government collaborators.

The virtual Android network at Sandia, said computer scientist John Floren, is carefully insulated from other networks at the Labs and the outside world, but can be built up into a realistic computing environment. That environment might include a full domain name service (DNS), an Internet relay chat (IRC) server, a web server and multiple subnets.

A key element of the Android project, Floren said, is a "spoof" Global Positioning System (GPS). He and his colleagues created simulated GPS data of a smartphone user in an urban environment, an important experiment because smartphones, and key features such as Bluetooth and Wi-Fi, are highly location-dependent and thus could easily be controlled and manipulated by rogue actors.

The researchers then fed that data into the GPS input of an Android virtual machine. Software on the virtual machine treats the location data as indistinguishable from real GPS data, which offers researchers a much richer and more accurate emulation environment from which to analyze and study what hackers can do to smartphone networks, Floren said.
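
For a single, stock Android emulator (as opposed to Sandia's 300,000-instance setup), simulated position data can be pushed in through the emulator console's geo fix command. The following sketch illustrates only that general mechanism; the coordinates and timing are invented, and newer emulator versions may additionally require an auth token before the console accepts commands.

```python
import socket
import time

def walk_path(host="localhost", port=5554, steps=20):
    """Feed a simulated straight-line walk to one Android emulator through
    its console's 'geo fix <longitude> <latitude>' command.  Illustration
    only; this is not Sandia's MegaDroid tooling."""
    lon, lat = -122.0840, 37.4220              # arbitrary starting point
    with socket.create_connection((host, port), timeout=5) as console:
        console.recv(4096)                     # discard the console banner
        for _ in range(steps):
            cmd = f"geo fix {lon:.6f} {lat:.6f}\n"
            console.sendall(cmd.encode("ascii"))
            lon += 0.0001                      # drift a few metres per step
            lat += 0.00005
            time.sleep(1.0)                    # roughly once-per-second updates
        console.sendall(b"quit\n")

if __name__ == "__main__":
    walk_path()
```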

This latest development by Sandia cyber researchers represents a significant steppingstone for those hoping to understand and limit the damage from network disruptions due to glitches in software or protocols, natural disasters, acts of terrorism, or other causes. These disruptions can cause significant economic and other losses for individual consumers, companies and governments.

"You can't defend against something you don't understand," Floren said. The larger the scale the better, he said, since more computer nodes offer more data for researchers to observe and study.

The research builds upon the Megatux project that started in 2009, in which Sandia scientists ran a million virtual Linux machines, and on a later project that focused on the Windows operating system, called MegaWin. Sandia researchers created those virtual networks at large scale using real Linux and Windows instances in virtual machines.

The main challenge in studying Android-based machines, the researchers say, is the sheer complexity of the software. Google, which developed the Android operating system, wrote some 14 million lines of code into the software, and the system runs on top of a Linux kernel, which more than doubles the amount of code.

"It's possible for something to go wrong on the scale of a big wireless network because of a coding mistake in an operating system or an application, and it's very hard to diagnose and fix," said Fritz. "You can't possibly read through 15 million lines of code and understand every possible interaction between all these devices and the network."

Much of Sandia's work on virtual computing environments will soon be available for other cyber researchers via open source. Floren and Fritz believe Sandia should continue to work on tools that industry leaders and developers can use to better diagnose and fix problems in computer networks.

"Tools are only useful if they're used," said Fritz.

MegaDroid primarily will be useful as a tool to ferret out problems that would manifest themselves when large numbers of smartphones interact, said Keith Vanderveen, manager of Sandia's Scalable and Secure Systems Research department.

"You could also extend the technology to other platforms besides Android," said Vanderveen. "Apple's iOS, for instance, could take advantage of our body of knowledge and the toolkit we're developing." He said Sandia also plans to use MegaDroid to explore issues of data protection and data leakage, which he said concern government agencies such as the departments of Defense and Homeland Security.

Story Source:

The above story is reprinted from materials provided by Sandia National Laboratories.


Computer program can identify rough sketches

ScienceDaily (Sep. 13, 2012) — First they took over chess. Then Jeopardy. Soon, computers could make the ideal partner in a game of Draw Something (or its forebear, Pictionary).

Researchers from Brown University and the Technical University of Berlin have developed a computer program that can recognize sketches as they're drawn in real time. It's the first computer application that enables "semantic understanding" of abstract sketches, the researchers say. The advance could clear the way for vastly improved sketch-based interface and search applications.

The research behind the program was presented last month at SIGGRAPH, the world's premier computer graphics conference. The paper is now available online (http://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/), together with a video, a library of sample sketches, and other materials.

Computers are already pretty good at matching sketches to objects as long as the sketches are accurate representations. For example, applications have been developed that can match police sketches to actual faces in mug shots. But iconic or abstract sketches -- the kind that most people are able to easily produce -- are another matter entirely.

For example, if you were asked to sketch a rabbit, you might draw a cartoony-looking thing with big ears, buckteeth, and a cotton tail. Another person probably wouldn't have much trouble recognizing your funny bunny as a rabbit -- despite the fact that it doesn't look all that much like a real rabbit.

"It might be that we only recognize it as a rabbit because we all grew up that way," said James Hays, assistant professor of computer science at Brown, who developed the new program with Mathias Eitz and Marc Alexa from the Technical University in Berlin. "Whoever got the ball rolling on caricaturing rabbits like that, that's just how we all draw them now."

Getting a computer to understand what we've come to understand through years of cartoons and coloring books is a monumentally difficult task. The key to making this new program work, Hays says, is a large database of sketches that could be used to teach a computer how humans sketch objects. "This is really the first time anybody has examined a large database of actual sketches," Hays said.

To put the database together, the researchers first came up with a list of everyday objects that people might be inclined to sketch. "We looked at an existing computer vision dataset called LabelMe, which has a lot of annotated photographs," Hays said. "We looked at the label frequency and we got the most popular objects in photographs. Then we added other things of interest that we thought might occur in sketches, like rainbows for example."

They ended up with a set of 250 object categories. Then the researchers used Mechanical Turk, a crowdsourcing marketplace run by Amazon, to hire people to sketch objects from each category -- 20,000 sketches in all. Those data were then fed into existing recognition and machine learning algorithms to teach the program which sketches belong to which categories. From there, the team developed an interface where users input new sketches, and the computer tries to identify them in real time, as quickly as the user draws them.
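
The training-and-recognition step can be sketched with a standard machine-learning toolkit. The code below is not the published system: random arrays stand in for whatever per-sketch feature vectors the real pipeline extracts, the category count is reduced to keep the example small, and the classifier is simply a common choice for this kind of task.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: one fixed-length feature vector per sketch plus a category
# label.  Real feature vectors would be computed from the drawings themselves.
rng = np.random.default_rng(42)
n_sketches, n_features, n_categories = 1000, 128, 25   # real vocabulary: 250 categories
X = rng.normal(size=(n_sketches, n_features))
y = rng.integers(0, n_categories, n_sketches)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Multi-class support vector machine with an RBF kernel; with real features
# the held-out score would be meaningful rather than chance level.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Real-time recognition would amount to re-extracting features from the
# partially drawn sketch after each stroke and calling clf.predict on them.
```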

As it is now, the program successfully identifies sketches with around 56-percent accuracy, as long as the object is included in one of the 250 categories. That's not bad, considering that when the researchers asked actual humans to identify sketches in the database, they managed about 73-percent accuracy. "The gap between human and computational performance is not so big, not as big certainly as it is in other computer vision problems," Hays said.

The program isn't ready to rule Pictionary just yet, mainly because of its limited 250-category vocabulary. But expanding it to include more categories is a possibility, Hays says. One way to do that might be to turn the program into a game and collect the data that players input. The team has already made a free iPhone/iPad app that could be gamified.

"The game could ask you to sketch something and if another person is able to successfully recognize it, then we can say that must have been a decent enough sketch," he said. "You could collect all sorts of training data that way."

And that kind of crowdsourced data has been key to the project so far.

"It was the data gathering that had been holding this back, not the digital representation or the machine learning; those have been around for a decade," Hays said. "There's just no way to learn to recognize say, sketches of lions, based on just a clever algorithm. The algorithm really needs to see close to 100 instances of how people draw lions, and then it becomes possible to tell lions from potted plants."

Ultimately a program like this one could end up being much more than just fun and games. It could be used to develop better sketch-based interface and search applications. Despite the ubiquity of touch screens, sketch-based search still isn't widely used, but that's probably because it simply hasn't worked very well, Hays says.

A better sketch-based interface might improve computer accessibility. "Directly searching for some visual shape is probably easier in some domains," Hays said. "It avoids all language issues; that's certainly one thing."

Story Source:

The above story is reprinted from materials provided by Brown University.
