
Monday, December 31, 2012

On-demand synaptic electronics: Circuits that learn and forget

Dec. 20, 2012 — Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.

Synaptic devices that mimic the learning and memory processes in living organisms are attracting avid interest as an alternative to standard computing elements that may help extend Moore's law beyond current physical limits.

However, so far artificial synaptic systems have been hampered by complex fabrication requirements and by limitations in the learning and memory functions they can mimic. Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.

In its initial pristine condition the system has very high resistance. Sweeping both negative and positive voltages across the system decreases this resistance nonlinearly, but it soon returns to its original value, indicating a volatile state. Applying either positive or negative pulses at the top electrode introduces a soft breakdown, after which sweeping both negative and positive voltages leads to non-volatile states that exhibit bipolar resistance switching and rectification for longer periods of time.

The researchers draw similarities between the device properties -- its volatile and non-volatile states, and the current-fading process following positive voltage pulses -- and models of neural behaviour, namely short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers, with the resulting changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of an electrically conductive filament and the oxygen absorbability of the Pt electrode.
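The short-term/long-term memory analogy can be made concrete with a toy numerical model: each voltage pulse raises the device conductance, sub-threshold changes relax back ("forgetting"), and changes driven past a threshold persist. The sketch below is purely illustrative; the dynamics and every parameter are invented for this example and are not taken from the paper.

```python
# Toy model of volatile vs. non-volatile conductance change, loosely
# inspired by the short-/long-term memory analogy in the article.
# All parameters are invented for illustration, not taken from the paper.

def simulate(pulses, decay=0.8, threshold=3.0):
    """Apply a train of voltage pulses; conductance above `threshold`
    is retained (long-term), otherwise it relaxes ("forgetting")."""
    g = 0.0          # normalized conductance (inverse of resistance)
    history = []
    for p in pulses:
        g += p                    # each pulse raises conductance
        if g < threshold:
            g *= decay            # volatile regime: partial relaxation
        history.append(round(g, 3))
    return history

# Sparse stimulation stays volatile and fades; rapid repetition drives
# the device past the threshold, where the change persists.
print(simulate([1, 0, 0, 1, 0]))   # fades between isolated pulses
print(simulate([1, 1, 1, 1, 1]))   # consolidates after repeated pulses
```

In this toy picture, repetition history alone decides whether a change is remembered, which mirrors the article's point that pulse polarity, magnitude and repetition history program the device.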

As the researchers conclude, "These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history."


Story Source:

The above story is reprinted from materials provided by International Center for Materials Nanoarchitectonics (MANA), via ResearchSEA.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Rui Yang, Kazuya Terabe, Guangqiang Liu, Tohru Tsuruoka, Tsuyoshi Hasegawa, James K. Gimzewski, Masakazu Aono. On-Demand Nanodevice with Electrical and Neuromorphic Multifunction Realized by Local Ion Migration. ACS Nano, 2012; 6 (11): 9515 DOI: 10.1021/nn302510e

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



IT building blocks for the ordinary person

Nov. 21, 2012 — Would you like to create your own tourist guide? Or put together telecom services that give you better control of the everyday functions on your phone?

We seem to be drowning in 'intelligent things' and IT services. In our smart home, we can use various applications to control the front door, TV, washing machine, vacuum, heating and blinds. Other apps enable us to find out what time the bus is leaving, or book a table at a restaurant. On the medical side, there are sensors that can monitor your heart rate, intelligent pill boxes that remember when you should take your medicine, and applications to notify relatives if an elderly person doesn't get out of bed at their normal time.

But what if you go on holiday, and want to be able to water the plants in your garden, or turn the heating on or off in a certain room when the weather changes? Do you want to keep checking on yr.no in your hotel room, or use various different apps to control your house remotely? Wouldn't it be better if you could programme your house before you set off, and then enjoy your holiday without worrying?

Overwhelmed?

'We're now seeing many intelligent devices affecting our lives, and we are expecting to see more,' says Jacqueline Floch at SINTEF ICT. 'The question is whether people out there will be able to function independently. Some will manage to acquire the right technology skills and tailor IT services to their own needs, while others will feel overwhelmed by the huge choice'.

The researchers' idea is therefore to create a tool composed of different building blocks, so that people can select, combine and put together the services they need. 'Since most people aren't qualified programmers or software developers, we have to provide them with a new user interface and a tool that they can understand,' says Floch.

Working with companies

For the last four years, the ICT researchers -- supported by the Research Council of Norway and the VERDIKT programme -- have been working with the three companies Tellu, Gintel and Wireless Trondheim on various aspects of the project. The result is the 'UbiSys' framework. Tellu currently develops software systems for the mobile market, while Gintel creates software for telecom operators and service providers, and Wireless Trondheim offers a network on which new IT services can be operated experimentally.

Easier tracking

The researchers have used the services offered by these companies as their starting point. For example, Tellu in Oslo markets the SmartTrack service platform. This allows different tracking services to connect and work together to monitor mobile units, whether these are devices or people.

It is possible to track these units, irrespective of their situation and condition, such as their location, movement or battery level. For example, users in the transport industry can track containers, while a smelting plant can keep track of its tools.

'The SmartTrack interface supports the definition of rules such as "if a person has a fall, notify a relative" or "if a tool is not indoors by 20:00, send an alarm to the duty officer." This interface is complex, and requires programming expertise. We have simplified this, allowing Tellu's customers to create their own rules,' says Floch. By combining SmartTrack with 'UbiSys', she thinks that 'the man in the street' will be able to use the service.
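Rules of the kind Floch quotes can be modelled as simple condition-action pairs. The sketch below illustrates that idea only; the names and data structures are invented and bear no relation to the actual SmartTrack or UbiSys interfaces.

```python
# Hypothetical sketch of user-composable tracking rules as
# condition/action pairs; this is NOT the real SmartTrack interface.

def make_rule(condition, action):
    """Bundle a condition test and an action into one callable rule."""
    def rule(event):
        if condition(event):
            return action(event)
        return None          # rule does not apply to this event
    return rule

# "If a tool is not indoors by 20:00, send an alarm to the duty officer."
tool_rule = make_rule(
    condition=lambda e: e["type"] == "tool" and not e["indoors"] and e["hour"] >= 20,
    action=lambda e: f"ALARM to duty officer: {e['id']} still outside",
)

print(tool_rule({"type": "tool", "id": "drill-7", "indoors": False, "hour": 21}))
print(tool_rule({"type": "tool", "id": "drill-7", "indoors": True, "hour": 21}))
```

The point of the simplification described in the article is that an end user would compose such condition and action blocks in a graphical editor rather than write code.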

Telecom services

Gintel develops systems that enable telecom operators and service providers to tailor services to their corporate customers. These services might be managing incoming calls or conference services. Gintel currently offers its operators the 'Easy Designer' framework which allows users to modify existing services and quickly create new solutions. No software development expertise is needed to use Easy Designer, but users need to be expert in the communication and training domains.

In response to requests from its clients, Gintel is now moving towards the end users of telecom services, i.e. telephone users. The company has therefore started using 'UbiSys', enabling end users to put together telephone services themselves. The result is 'EasyDroid'.

'What we have done,' says Jacqueline Floch, 'is give people a way of controlling the everyday functions on their phones. You can link incoming calls to your calendar and location. If you're in a meeting or at a concert, you can set the phone so that it automatically diverts calls. You can also choose to receive calls from 'important' people, send a text when the meeting is over, or forward the call to someone else. There are many options. The point is that you are in control and can put things together in any way you like.'

City Explorer

In order to demonstrate to a broader audience how they envisage these tools made of different building blocks, the SINTEF researchers have developed the City Explorer application. This is an Android app that enables users to create their own city guide.

The app lets people create or edit places and itineraries in a city, and the new prototype includes three examples for Trondheim: one for tourists, one for people interested in architecture, and one for visitors interested in sculptures.

'Again, the important thing is that people can put things together just as they want,' says Jacqueline Floch. 'We are interested in adding to existing functions, so that the user can create their own "menu list."

'For example, you can set your phone to go to silent mode in specific locations. Or you can get your phone to automatically obtain the bus timetable for the next stop on a given itinerary, and remind you when you are due to be arriving at that stop. Some people prefer to do this manually, while others are easily distracted and forget to do it. People are different, and that's why we want to give them the option of controlling everyday things themselves.'

The research group now needs funding for further work, which will focus on the elderly and AAL -- Ambient Assisted Living, or welfare technology.

Fact box: UbiSys -- framework for end user service development. UbiSys is made up of three tools: UbiCompPro, a tool for professionals, used to develop 'building blocks'; UbiComposer, an editor for end users, used to put building blocks together; and UbiCompRun, middleware (a runtime platform) used to execute the services users have put together.


Story Source:

The above story is reprinted from materials provided by SINTEF. The original article was written by Åse Dragland.


Sunday, December 30, 2012

Information and communication technologies allow electrical consumption to be reduced by one third

Nov. 12, 2012 — Information and Communication Technologies (ICTs) may allow a thirty percent reduction in electrical consumption in cities. This is what has been demonstrated by a European research project in which Universidad Carlos III of Madrid (UC3M) participated. The results were presented after an analysis of how to optimize the use of residential consumption and generation infrastructures.

The scientists and technologists participating in the ENERsip project form a consortium of ten partners from five European countries, led by the Spanish company Tecnalia. They have designed, developed and validated an ICT platform that allows residential electrical consumption to be reduced by 30 percent, while also integrating micro-generation installations that use renewable energy, such as photovoltaic solar panels installed on the roofs of homes.

The key to obtaining these results lies in two strategies: reducing the consumption of electricity in homes (around 15 to 20 percent) and adjusting the consumption and generation of electricity in districts (approximately 15 to 20 percent). First, the system "gives the users information regarding their consumption, allowing them to identify the appliances that use the most energy; it then suggests possible solutions, attempting to modify certain behaviors and fostering good practices that allow consumers to reduce their electricity bill," explains Professor José Ignacio Moreno, of the UC3M's Department of Telematic Engineering. In this way, the ENERsip platform allows appliances to be monitored by networks of sensors and actuators so that they can be controlled wirelessly by using web applications.

In addition, the system they have designed carries out automatic actions that allow the consumption in homes within a district to be adjusted as much as possible so that they use renewable energy generated by sources from within the same district, thus reducing energy flows and, consequently, energy losses and costs. "This type of action falls within what is known as electricity demand management," indicates another of the UC3M researchers, Gregorio López. For example, he comments, the temperature could be raised by a few degrees in the summer (or lowered in winter) in hundreds of thousands of homes during the periods of lowest production of renewable energy in a district, or the programmed running of certain appliances (dishwashers, washing machines) can be moved to a time period when renewable energy production is at its peak. "Of course," López points out, "those households would have agreed in advance to participate in this type of program in exchange for certain incentives, and pre-established levels of comfort would never be compromised."
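In its simplest form, the appliance-shifting idea reduces to picking the start hour with the highest forecast renewable output from the hours the household has approved. A minimal sketch, with invented forecast values and no relation to the actual ENERsip platform:

```python
# Toy illustration of shifting a deferrable appliance run to the hour
# with the highest forecast renewable output; all values are invented.

def best_start_hour(renewable_forecast, allowed_hours):
    """Pick the allowed hour with the highest forecast generation."""
    return max(allowed_hours, key=lambda h: renewable_forecast[h])

# Forecast solar output (kW) for hours 0-23, peaking around midday.
forecast = [0] * 8 + [1, 2, 4, 6, 7, 6, 4, 2, 1] + [0] * 7

# The household allows the dishwasher to run any time from 09:00 to 18:00.
print(best_start_hour(forecast, allowed_hours=range(9, 18)))   # -> 12, the forecast peak
```

A real demand-management platform would of course also weigh prices, grid constraints and the comfort limits the article mentions; this only shows the core matching of flexible load to generation.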

Intelligent and efficient electrical grids

The conclusion of this project, which falls within what is known as the Smart Grid framework, is that, thanks to the automatic actions that the use of ICTs permits, savings in electrical consumption of up to 30 percent can be achieved. To obtain these results, the researchers tested the system in various computer simulations and validated the platform in a pilot project carried out in three buildings located at different geographic points in Israel. Moreover, these figures are in the same range as those which appear in reports on other projects, such as SMART 2020, which estimates that the application of ICTs to improve energy efficiency could result in savings of approximately 600 billion euros globally in the year 2020.

A few basic ICT installations would be sufficient to make the ENERsip platform work. Specifically, the platform would require networks with sensors and actuators for the consumption and micro-generation infrastructures, an Internet connection and a web application that would allow access from any device connected to the Web (although the ENERsip project itself also uses a dedicated core communications infrastructure that offers certain advantages). "It could be implemented from any home equipped with the typical consumer infrastructure or consumer and micro-generation infrastructure," José Ignacio Moreno states. The team he heads at UC3M has been in charge of the formal design and modeling of the communications architecture of the ENERsip platform, as well as the software simulations to evaluate the performance of that architecture. In addition, he has participated in the design and definition of the platform's integration and validation phases and scenarios; he has reported on the progress of the research through technical articles presented at key communication conferences, such as INFOCOM 2011 and ICC 2012.

The ENERsip consortium, which is formed by ten partners from five European countries, is led by the Spanish company Tecnalia and includes the participation of various leading companies in the field, such as Amplia Soluciones (Spain), Honeywell (Czech Republic), IEC (Israel Electric Corporation, Israel), ISA (Intelligent Sensing Anywhere, Portugal), ISASTUR (Ingeniería y Suministros de Asturias S.A., Spain) and MSIL (Motorola Solutions Israel Ltd, Israel), as well as research centers such as the ISR-UC (Institute of Systems and Robotics-University of Coimbra, Portugal), UC3M (Universidad Carlos III of Madrid, Spain) and VITO (Vlaamse Instelling voor Technologisch Onderzoek, Belgium).

Project Web: www.enersip-project.eu


Story Source:

The above story is reprinted from materials provided by Universidad Carlos III de Madrid - Oficina de Información Científica.


Friday, November 2, 2012

A Travelling Salesman Problem special case: 30-year-old problem solved

ScienceDaily (Sep. 13, 2012) — The science of computational complexity studies problems such as the TSP -- the Travelling Salesman Problem -- where the time required to find an optimal solution is vital for practical applications such as air traffic control and the delivery of fresh food. Warwick Business School's Dr Vladimir Deineko and colleagues have now solved a TSP special case that had remained open for 30 years.

The Travelling Salesman Problem, or TSP, was first defined around 150 years ago. The problem then was to find the shortest possible route for salesmen to visit each of their customers once and finish back where they started. In the 21st century, this same problem now applies to a multitude of activities -- delivering fresh stock to supermarkets, supplying manufacturing lines, air traffic control, and even DNA sequencing. Complex and sophisticated computer programmes using optimisation -- where algorithms produce the best possible result from multiple choices -- now form the basis of solutions to these modern-day problems. The time required to find an optimal solution is vital for practical application of the TSP. How long can lorry drivers wait for their route to be finalised when the salads they hope to deliver will only be fresh for another 24 hours? How long can air traffic control keep an airliner flying in circles around Heathrow Airport?

The theoretical background behind these types of questions is studied in the theory of computational complexity. The TSP is of paramount significance for this branch of knowledge. Even a small incremental step in understanding the nature of this problem is of interest and benefit to the scientific community.

Associate Professor Dr Vladimir Deineko of Warwick Business School, together with Eranda Cela (University of Technology Graz, Austria) and Gerhard Woeginger (Eindhoven University, the Netherlands), has addressed a special case of the TSP -- an 'open problem', as it is termed -- first identified 30 years ago. The work of Dr Deineko and his colleagues gives a solution of theoretical significance for computer science and operational research.

Dr Deineko comments, "The TSP has served as a benchmark problem for all new and significant approaches developed in optimisation. It belongs to the set of so-called NP-hard problems. There are obviously some special cases when the TSP can be solved efficiently. The simplest possible case is when the cities are points on a straight line and the distances are as-the-crow-flies distances. One can easily get the shortest route for visiting all the points in this case. Probably the next simplest case would be when the cities are points on two perpendicular lines and the distances are again as-the-crow-flies distances (the so-called X-and-Y-axes TSP).

"Despite its apparent simplicity, this special case problem has been circulating in the scientific community for around 30 years. Until our work, it was not known whether an algorithm existed which would guarantee finding an optimal solution to any instance of these problems within a reasonable amount of time. We have now proved that the X-and-Y axes TSP can be easily solved in a number of steps proportional to the square of the number of cities."

Dean of WBS, Professor Mark Taylor, comments, "I congratulate Dr Deineko and his colleagues in advancing our knowledge of this enormously complex subject. They have produced cutting-edge research which is not only of great importance to the scientific community, but ultimately also of great relevance to all of us who depend on modern technology as we go about our daily lives."


Story Source:

The above story is reprinted from materials provided by University of Warwick, via AlphaGalileo.


Journal Reference:

Eranda Çela, Vladimir Deineko, Gerhard J. Woeginger. The x-and-y-axes travelling salesman problem. European Journal of Operational Research, 2012; 223 (2): 333 DOI: 10.1016/j.ejor.2012.06.036


Thursday, November 1, 2012

Making Sudoku puzzles less puzzling

ScienceDaily (Oct. 11, 2012) — For anyone who has ever struggled while attempting to solve a Sudoku puzzle, University of Notre Dame complex networks researcher Zoltan Toroczkai and Notre Dame postdoctoral researcher Maria Ercsey-Ravasz are coming to the rescue. Not only can they explain why some Sudoku puzzles are harder than others, they have also developed a mathematical algorithm that solves Sudoku puzzles very quickly, without any guessing or backtracking.

Toroczkai and Ercsey-Ravasz, of Romania's Babes-Bolyai University, began studying Sudoku as part of their research into the theory of optimization and computational complexity. They note that most Sudoku enthusiasts use what is known as a "brute force" system to solve problems, combined with a good deal of guessing. Brute force systems essentially deploy all possible combinations of numbers in a Sudoku puzzle until the correct answer is found. While the method is successful, it is also time consuming.
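The guess-and-backtrack search described above is essentially the standard recursive solver sketched below. To be clear, this is the conventional approach the researchers set out to improve on, not their deterministic analog algorithm.

```python
# Sketch of the guess-and-backtrack search the article contrasts with
# the analog solver; this is NOT Toroczkai and Ercsey-Ravasz's method.

def valid(grid, r, c, v):
    """Check row, column and 3x3 box constraints for placing v."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill a 9x9 grid (0 = empty) by trial and backtracking, in place."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v          # guess
                        if solve(grid):
                            return True
                        grid[r][c] = 0          # backtrack on failure
                return False                    # dead end: undo and retry
    return True                                 # no empty cell left
```

Every `grid[r][c] = 0` line is a restart of exactly the kind the analog solver avoids: a random-looking choice is made, followed to a dead end, and undone.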

Instead, Toroczkai and Ercsey-Ravasz have proposed a universal analog algorithm that is completely deterministic (no guessing or exhaustive searching) and always arrives at the correct solution to a problem, and does so much more quickly.

The researchers also discovered that the time it took to solve a problem with their analog algorithm correlated with the difficulty of the problem as rated by human solvers. This led them to develop a ranking scale for problem or puzzle difficulty. The scale runs from 1 through 4, and it matches up nicely with the "easy" through "hard" to "ultra-hard" classification currently applied to Sudoku puzzles. A puzzle with a rating of 2 takes, on average, 10 times as long to solve as one with a rating of 1. According to this system, the hardest known puzzle so far has a rating of 3.6, and it is not known whether there are even harder puzzles out there.

"I had not been interested in Sudoku until we started working on the much more general class of Boolean satisfiability problems," Toroczkai said. "Since Sudoku is a part of this class, it seemed like a good testbed for our solver, so I familiarized myself with it. To me, and to a number of researchers studying such problems, a fascinating question is how far can us humans go in solving Sudoku puzzles deterministically, without backtracking -- that is without making a choice at random, then seeing where that leads to and if it fails, restarting. Our analog solver is deterministic -- there are no random choices or backtracks made during the dynamics."

Toroczkai and Ercsey-Ravasz believe their analog algorithm potentially can be applied to a wide variety of problems in industry, computer science and computational biology.

The research experience has also made Toroczkai a devotee of Sudoku puzzles.

"Both my wife and I have several Sudoku apps on our iPhones, and we must have played thousands of times, racing to get the shortest completion times on all levels," he said. "She often sees combinations of patterns that I completely miss. I have to deduce them. Without paper and pencil to jot down possibilities, it becomes impossible for me to solve many of the puzzles that our solver categorizes as hard or ultra-hard."

Toroczkai and Ercsey-Ravasz's methodology was first published in the journal Nature Physics, and its application to Sudoku appears in the Oct. 11 edition of the journal Scientific Reports.


Story Source:

The above story is reprinted from materials provided by University of Notre Dame. The original article was written by William G. Gilroy.


Journal References:

Mária Ercsey-Ravasz, Zoltán Toroczkai. Optimization hardness as transient chaos in an analog approach to constraint satisfaction. Nature Physics, 2011; 7 (12): 966 DOI: 10.1038/nphys2105

Mária Ercsey-Ravasz, Zoltán Toroczkai. The Chaos Within Sudoku. Scientific Reports, 2012; 2 DOI: 10.1038/srep00725


Wednesday, October 31, 2012

Fast algorithm extracts and compares document meaning

ScienceDaily (Sep. 25, 2012) — A computer program could compare two documents and spot the differences in their meaning using a fast semantic algorithm developed by information scientists in Poland.

Writing in the International Journal of Intelligent Information and Database Systems, Andrzej Sieminski of the Technical University of Wroclaw explains that extracting meaning and calculating the level of semantic similarity between two pieces of text without human intervention is a very difficult task. Computer scientists have proposed various methods for addressing this problem, but they all suffer from computational complexity, he says.

Sieminski has now attempted to reduce this complexity by merging a computationally efficient statistical approach to text analysis with a semantic component. Tests of the algorithm on English and Polish texts worked well. The test set consisted of 4,890 English sentences with 142,116 words and 11,760 Polish sentences with 184,524 words, scraped from online services via their newsfeeds over the course of five days. Sieminski points out that the Polish documents required an additional level of sophistication from the algorithm in terms of computing word meanings and disambiguation.

Traditional "manual" methods of indexing simply cannot now cope with the vast quantities of information generated on a daily basis by humanity as a whole in scientific research more specifically. The new algorithm once optimised could radically change the way in which we make archived documents searchable and allow knowledge to be extracted far more readily than is possible with standard indexing and search tools.

The approach also circumvents three critical problems faced by most users of conventional search engines. First, the lack of familiarity with advanced search options: with a semantic algorithm, advanced options become almost unnecessary. Secondly, the rigid nature of those options, which cannot capture the subtle nuances of a user's information needs: again, a tool that understands the meaning of a search and of the results it offers avoids this problem. Finally, the unwillingness to type a long query, or the unacceptably long time needed to do so: semantically aware search requires only simple input.

Sieminski points out that the key virtue of the research is the idea of using statistical similarity measures to assess semantic similarity. He explains that the semantic similarity of words can be inferred from the WordNet database, and proposes using this database only during text indexing. "Indexing is done only once so the inevitably long processing time is not an issue," he says. "From that point on we use only statistical algorithms, which are fast and high performance."
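The "statistical algorithms" referred to here are typically vector-space measures such as cosine similarity over term frequencies, with WordNet-derived synonyms folded in once, at indexing time. The sketch below shows only the generic statistical step, with no WordNet component; the measure actually used in the paper may differ.

```python
# Minimal vector-space similarity of the kind used in statistical text
# comparison; the exact measure in Sieminski's paper may differ.
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between term-frequency vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("the cat sat on the mat", "the cat sat on the rug"))   # -> 0.875
print(cosine_similarity("semantic similarity of texts", "fast statistical algorithms"))  # -> 0.0
```

The second pair shows the weakness a semantic layer addresses: texts with no shared words score zero even when their meanings are related, which is where synonym expansion at indexing time helps.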


Story Source:

The above story is reprinted from materials provided by Inderscience, via AlphaGalileo.


Journal Reference:

Andrzej Sieminski. Fast algorithm for assessing semantic similarity of texts. Int. J. Intelligent Information and Database Systems, 2012, 6, 495-512


Tuesday, October 30, 2012

Space travel with a new language in tow

ScienceDaily (Oct. 1, 2012) — On September 28, for the first time ever, SES, the Luxembourg-based satellite operator, had an Ariane 5 rocket carry into space a TV satellite that is built by Astrium and runs entirely on latest-generation software. Every single one of the programs used to operate the satellite was written in the new satellite language SPELL. The acronym stands for "Satellite Procedure Execution Language & Library."

What we are talking about here is a new standard that brings the many different programming languages previously used to operate satellites and their subsystems under one roof. The University of Luxembourg's Interdisciplinary Centre for Security, Reliability and Trust (SnT) has contributed substantially to SPELL's adoption in the operations of Astrium satellites. To this end, SnT scientists took an existing mathematical tool and refined it for practical application; with its help, procedures written in different native languages can now be translated into SPELL in a fully automated process.

SES is one of the world's biggest satellite operators, with a vast fleet of satellites in orbit. The satellites and their technical components are produced by different manufacturers, each of whom uses their own programming language. "Because of the complete and utter lack of common standards up until now, we used to have to make a big production out of operating and maintaining the machines," explains Martin Halliwell, Chief Technology Officer at SES. "Our operators were working with a number of different programming languages to help us control our SES fleet through space." This is problematic, as the machines do not easily forgive programming errors. Says Halliwell: "If a single error is made, it may result in our satellite getting lost in space. Which, for us, literally means incurring millions in losses."

This is why SES decided some time ago to develop the open-source software SPELL. SPELL allows for the careful execution of every imaginable navigational procedure from any given ground control system for all potential satellites in the fleet. In other words: maximum flexibility with maximum security. "There is, however, a catch to the whole thing," concedes Dr. Frank Hermann, SnT scientist. "All the various control procedures that exist in different programming languages and that are in use must be converted over to SPELL. If that does not happen automatically and one hundred percent error-free, it quickly turns into a very resource-intensive and error-prone undertaking."

Together with SES automation specialists, SnT's Frank Hermann and his colleagues have tackled the problem head-on, using a method known as triple graph transformation to automatically translate the programming languages employed by the new satellite's subsystems into the common language SPELL. According to Hermann, "triple graph transformation is a mathematical tool that has been the focus of active research since the 1990s. Along with other mathematical tools, it represents the ideal instrument for combining different programming languages under SPELL."

What's special about the new translation process is that it does not require any source code programming. "We are working with a visual development setting, which records translation rules in a graphic user interface," explains Hermann. These rules are automatically executed by specialized transformation tools. Quality assurance happens through consistency checks, which are automated as well. "Their efficacy has been documented through multiple formal mathematical proofs," says Hermann. If the translation runs smoothly, every piece of information from the original language is first converted into a graph. "This creates a network made up of many different nodes on the graphic interface," explains Hermann. The network is then read and translated into target graphs for the target language SPELL. "Every single bit of information in the original language has a corresponding SPELL counterpart."
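The article gives no code, but the rule-driven, graph-to-graph translation it describes can be sketched roughly as follows. Everything here -- the node types, the rules, and the SPELL-like output syntax -- is a hypothetical illustration, not the actual SES/SnT rule set:

```python
# Minimal sketch of rule-based graph translation, loosely inspired by
# triple graph grammars. Node types, rules and the SPELL-like output
# are invented for illustration.

# A "graph" here is simply a list of nodes; each node is (type, attributes).
source_graph = [
    ("SendCommand", {"target": "thruster", "value": "ON"}),
    ("Wait",        {"seconds": 5}),
    ("SendCommand", {"target": "thruster", "value": "OFF"}),
]

# Translation rules: source node type -> function producing a target node.
rules = {
    "SendCommand": lambda a: ("Send", f"Send {a['target'].upper()} {a['value']}"),
    "Wait":        lambda a: ("WaitFor", f"WaitFor {a['seconds']} sec"),
}

def translate(graph):
    """Apply one rule per source node, building the target graph.

    A real triple graph grammar also maintains a correspondence graph
    linking each source node to its target node; we keep that as a
    list of (source_type, target_type) pairs.
    """
    target, correspondence = [], []
    for node_type, attrs in graph:
        tgt = rules[node_type](attrs)
        target.append(tgt)
        correspondence.append((node_type, tgt[0]))
    return target, correspondence

target_graph, links = translate(source_graph)
for _, stmt in target_graph:
    print(stmt)
```

The correspondence list is what lets automated consistency checks verify that every source node has exactly one counterpart in the target, mirroring the "every single bit of information has a SPELL counterpart" property described above.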

The SES validation teams have confirmed that the translation is highly precise. "This was a prerequisite for being able to uniformly program our new satellite's systems using SPELL," says Martin Halliwell. SnT's Vice-Director, Prof. Thomas Engel, is very pleased with the SnT scientists' performance specifically and with the SES/SnT collaboration in general: "The new satellite and SPELL will now have to prove themselves in space. If everything runs smoothly -- which we are quite certain it will -- our basic science research will have made an important contribution to increasing SES's performance and to making Luxembourg more competitive in this area."


Story Source:

The above story is reprinted from materials provided by Université du Luxembourg, via AlphaGalileo.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Monday, October 29, 2012

Computers get a better way to detect threats

ScienceDaily (Sep. 20, 2012) — UT Dallas computer scientists have developed a technique to automatically allow one computer in a virtual network to monitor another for intrusions, viruses or anything else that could cause a computer to malfunction.

The technique has been dubbed "space travel" because it sends computer data to a world outside its home, and bridges the gap between computer hardware and software systems.

"Space travel might change the daily practice for many services offered virtually for cloud providers and data centers today, and as this technology becomes more popular in a few years, for the user at home on their desktop," said Dr. Zhiqiang Lin, the research team's leader and an assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.

As cloud computing is becoming more popular, new techniques to protect the systems must be developed. Since this type of computing is Internet-based, skilled computer specialists can control the main part of the system virtually -- using software to emulate hardware.

Lin and his team programmed space travel to use existing code to gather information in a computer's memory and automatically transfer it to a secure virtual machine -- one that is isolated and protected from outside interference.

"You have an exact copy of the operating system of the computer inside the secure virtual machine that a hacker can't compromise," Lin said. "Using this machine, the user or antivirus software can understand what's happening with the space-traveled computer, setting off red flags if there is any intrusion."

Previously, software developers had to manually write such tools.

"With our technique, the tools already being used on the computer become part of the defense process," he said.
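The paper itself is not reproduced here, but the general "cross-view" idea behind this kind of out-of-VM introspection can be sketched simply: compare what tools inside the guest report with what the secure virtual machine reconstructs from the guest's raw memory. The process names and views below are invented for illustration:

```python
# Hypothetical sketch of cross-view intrusion detection. A rootkit
# can lie to in-guest tools (like `ps`), but not to the memory
# snapshot the secure VM reads directly. All data here is made up.

def find_hidden(in_guest_view, introspected_view):
    """Return processes visible in memory but hidden from in-guest tools."""
    return sorted(set(introspected_view) - set(in_guest_view))

# What a tool running inside the (compromised) guest reports:
in_guest = ["init", "sshd", "cron"]
# What the secure VM reconstructs from the guest's memory image:
introspected = ["init", "sshd", "cron", "rootkit_helper"]

suspicious = find_hidden(in_guest, introspected)
print("Hidden processes:", suspicious)  # flags rootkit_helper
```

The contribution described in the article is automating the hard part this sketch glosses over: reusing the guest's own existing code to produce the introspected view, instead of hand-writing memory-parsing tools.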

The gap between virtualized computer hardware and software operating on top of it was first characterized by Drs. Peter Chen and Brian Noble, faculty members from the University of Michigan.

"The ability to leverage existing code goes a long way in solving the gap problem inherent to many types of virtual machine services," said Chen, Arthur F. Thurnau Professor of Electrical Engineering and Computer Science, who first proposed the gap in 2001. "Fu and Lin have developed an interesting way to take existing code from a trusted system and automatically use it to detect intrusions."

Lin said the space travel technique will help the FBI understand what is happening inside a suspect's computer even if they are physically miles away, instead of having to buy expensive software.

Space travel was presented at the most recent IEEE Symposium on Security and Privacy. Lin developed this with Yangchun Fu, a research assistant in computer science.

"This is the top conference in cybersecurity," said Bhavani Thuraisingham, executive director of the UT Dallas Cyber Security Research and Education Center and a Louis A. Beecherl Jr. Distinguished Professor in the Jonsson School. "It is a major breakthrough that virtual developers no longer need to write any code to bridge the gap by using the technology invented by Dr. Lin and Mr. Fu. This research has given us tremendous visibility among the cybersecurity research community around the world."


Story Source:

The above story is reprinted from materials provided by University of Texas, Dallas.




Turn your dreams into music

ScienceDaily (Sep. 10, 2012) — Computer scientists in Finland have developed a method that automatically composes music out of sleep measurements.

Developed under Hannu Toivonen, Professor of Computer Science at the University of Helsinki, Finland, the software automatically composes synthetic music using data related to a person's own sleep as input.

The composition program is the work of Aurora Tulilaulu, a student of Professor Toivonen.

"The software composes a unique piece based on the stages of sleep, movement, heart rate and breathing. It compresses a night's sleep into a couple of minutes," she describes.

"We are developing a novel way of illustrating, or in fact experiencing, data. Music can, for example, arouse a variety of feelings to describe the properties of the data. Sleep analysis is a natural first application," Hannu Toivonen justifies the choice of the research topic.

The project utilises a sensitive force sensor placed under the mattress.

"Heartbeats and respiratory rhythm are extracted from the sensor's measurement signal, and the stages of sleep are deduced from them," says Joonas Paalasmaa, a postgraduate student in the Department of Computer Science. He designed the sleep stage software at Beddit, a company that provides services in the field.
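The real composition software is far richer, but the core idea -- turning a night's physiological data into notes -- can be illustrated with a toy mapping. The stage names, pitches and mapping below are invented for illustration, not taken from the actual system:

```python
# Hypothetical sketch: sleep stage picks the pitch register and
# heart rate sets note duration (one beat per heartbeat).

STAGE_PITCH = {"deep": 48, "light": 60, "rem": 67, "awake": 72}  # MIDI note numbers

def compose(epochs):
    """Turn (stage, heart_rate_bpm) epochs into (pitch, duration_sec) notes."""
    notes = []
    for stage, bpm in epochs:
        pitch = STAGE_PITCH[stage]
        duration = round(60.0 / bpm, 2)  # seconds per heartbeat
        notes.append((pitch, duration))
    return notes

# A (made-up) compressed night: stage and average heart rate per epoch.
night = [("light", 60), ("deep", 50), ("rem", 72), ("awake", 80)]
print(compose(night))
```

Deep sleep with a slow heart rate thus yields low, long notes, while waking yields high, quick ones -- a crude version of "compressing a night's sleep into a couple of minutes" of music.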

The composition service is available online at http://sleepmusicalization.net/. The users of Beddit's service can have music composed from their own sleep, while others can listen to the compositions. The online service is the work of the fourth research team member, Mikko Waris.


Story Source:

The above story is reprinted from materials provided by Helsingin yliopisto (University of Helsinki), via AlphaGalileo.


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



Sunday, October 28, 2012

Engineers built a supercomputer from 64 Raspberry Pi computers and Lego

ScienceDaily (Sep. 11, 2012) — Computational Engineers at the University of Southampton have built a supercomputer from 64 Raspberry Pi computers and Lego.

The team, led by Professor Simon Cox, consisted of Richard Boardman, Andy Everett, Steven Johnston, Gereon Kaiping, Neil O'Brien, Mark Scott and Oz Parchment, along with Professor Cox's son James Cox (aged 6) who provided specialist support on Lego and system testing.

Professor Cox comments: "As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer."

The racking was built using Lego with a design developed by Simon and James, who has also been testing the Raspberry Pi by programming it, using the free computer programming software Python and Scratch, over the summer. The machine, named "Iridis-Pi" after the University's Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes over Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1TB of storage (a 16GB SD card for each Raspberry Pi). Professor Cox uses the free plug-in 'Python Tools for Visual Studio' to develop code for the Raspberry Pi.

Professor Cox adds: "The first test we ran -- well obviously we calculated Pi on the Raspberry Pi using MPI, which is a well-known first test for any new supercomputer."
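The classic MPI "compute pi" test numerically integrates 4/(1+x²) over [0, 1], with each rank summing every size-th strip before a final reduction. The sketch below simulates that decomposition serially, so it needs no MPI installation; in the real cluster each `partial_pi(rank, size)` call would run on its own Raspberry Pi and the final sum would be an `Allreduce`:

```python
# Serial simulation of the parallel-pi decomposition used in the
# classic MPI first test (midpoint rule).

def partial_pi(rank, size, n=100_000):
    """Sum of the integration strips assigned to one rank."""
    h = 1.0 / n
    s = 0.0
    for i in range(rank, n, size):       # rank takes strips rank, rank+size, ...
        x = h * (i + 0.5)                # midpoint of strip i
        s += 4.0 / (1.0 + x * x)
    return h * s

size = 64                                # one "rank" per Raspberry Pi
pi_estimate = sum(partial_pi(r, size) for r in range(size))
print(pi_estimate)                       # close to 3.141592653589793
```

The strided (round-robin) strip assignment keeps each rank's work balanced, which is why it is the textbook decomposition for this test.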

"The team wants to see this low-cost system as a starting point to inspire and enable students to apply high-performance computing and data handling to tackle complex engineering and scientific challenges as part of our on-going outreach activities."

James Cox (aged 6) says: "The Raspberry Pi is great fun and it is amazing that I can hold it in my hand and write computer programs or play games on it."

If you want to build a Raspberry Pi Supercomputer yourself see: http://www.soton.ac.uk/~sjc/raspberrypi


Story Source:

The above story is reprinted from materials provided by University of Southampton, via AlphaGalileo.




Saturday, October 27, 2012

Computers match humans in understanding art

ScienceDaily (Sep. 25, 2012) — Understanding and evaluating art has widely been considered a task meant for humans -- until now. Computer scientists Lior Shamir and Jane Tarakhovsky of Lawrence Technological University in Michigan tackled the question "can machines understand art?" The results were very surprising: they developed an algorithm demonstrating that computers are able to "understand" art in a fashion very similar to how art historians perform their analysis, mimicking the perception of expert art critics.

In the experiment, published in the recent issue of ACM Journal on Computing and Cultural Heritage, the researchers used approximately 1,000 paintings of 34 well-known artists, and let the computer algorithm analyze the similarity between them based solely on the visual content of the paintings, and without any human guidance. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.

The analysis showed that the computer was clearly able to identify the differences between classical realism and modern artistic styles, and automatically separated the painters into two groups, 18 classical painters and 16 modern painters. Inside these two broad groups the computer identified sub-groups of painters that were part of the same artistic movements. For instance, the computer automatically placed the High Renaissance artists Raphael, Leonardo Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were also clustered together by the algorithm, showing that the computer automatically identified that these painters share similar artistic styles.

The automatic computer analysis is in agreement with the view of art historians, who associate these three painters with the Baroque artistic movement. Similarly, the computer algorithm deduced that Gauguin and Cézanne, both considered post-impressionists, have similar artistic styles, and also identified similarities between the styles of Salvador Dali, Max Ernst, and Giorgio de Chirico, all of whom are considered by art historians to be part of the surrealist school of art. Overall, the computer automatically produced an analysis that is in large agreement with the influential links between painters and artistic movements as defined by art historians and critics.

While the average non-expert can normally make the broad differentiation between modern art and classical realism, they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism. The experiment showed that machines can outperform untrained humans in the analysis of fine art.

The experiment was performed by computing 4,027 numerical image content descriptors from each painting -- numbers that reflect the content of the image, such as texture, color and shapes, in a quantitative fashion. This allows the computer to capture many aspects of the visual content and to use pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles, and then quantify these similarities.
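The pipeline's core idea -- describe each painter by a numeric feature vector, then group painters whose vectors are close -- can be sketched with a drastically reduced version. The real study used 4,027 descriptors per painting; the two toy features and all of the numbers below are invented for illustration:

```python
# Hypothetical two-feature sketch of similarity-based grouping.
import math

# (mean brightness, edge sharpness) -- toy stand-ins for real descriptors
painters = {
    "Raphael": (0.62, 0.81),
    "Vermeer": (0.58, 0.78),
    "Dali":    (0.35, 0.22),
    "Ernst":   (0.31, 0.25),
}

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(name):
    """Painter with the most similar feature vector."""
    others = {k: v for k, v in painters.items() if k != name}
    return min(others, key=lambda k: dist(painters[name], others[k]))

print(nearest("Raphael"))  # pairs with Vermeer (both "classical" vectors)
print(nearest("Dali"))     # pairs with Ernst (both "modern" vectors)
```

With thousands of descriptors instead of two, the same nearest-neighbor logic produces the network of similarities between painters described above.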


Story Source:

The above story is reprinted from materials provided by Lawrence Technological University, via EurekAlert!, a service of AAAS.


Journal Reference:

Lior Shamir, Jane A. Tarakhovsky. Computer analysis of art. Journal on Computing and Cultural Heritage, 2012; 5 (2): 1 DOI: 10.1145/2307723.2307726




Friday, October 26, 2012

Education: Get with the computer program

ScienceDaily (Oct. 5, 2012) — From email to Twitter, blogs to word processors, computer programs provide countless communications opportunities. While social applications have dominated the development of the participatory web for users and programmers alike, this era of Web 2.0 is applicable to more than just networking opportunities: it impacts education.

The integration of increasingly sophisticated information and communication tools (ICTs) is sweeping university classrooms. Understanding how learners and instructors perceive the effectiveness of these tools in the classroom is critical to the success or failure of their integration into higher education settings. A new study led by Concordia University shows that when it comes to pedagogy, students prefer an engaging lecture to a targeted tweet.

Twelve universities across Quebec recently signed up to be part of the first cross-provincial study of perceptions of ICT integration and course effectiveness in higher learning -- the first to assess how professors are making the leap from lectures to LinkedIn, and whether students are up for the change to the traditional educational model.

At the forefront of this study was Concordia's own Vivek Venkatesh. As associate dean of academic programs and development within the School of Graduate Studies, he has a particular interest in how education is evolving within post-secondary institutions. To conduct the study, Venkatesh partnered with Magda Fusaro from UQAM's Department of Management and Technology. Together, they conducted a pilot project at UQAM before rolling the project out to universities across the province.

"We hit the ground running and received an overwhelmingly positive response with 15,020 students and 2,640 instructors responding to our electronic questionnaires in February and March of 2011," recalls Venkatesh. The 120-item surveys gauged course structure preferences, perceptions of the usefulness of teaching methods, and the level of technology knowledge of both students and teachers.

The surprising results showed that students were more appreciative of the literally "old school" approach of lectures and were less enthusiastic than teachers about using ICTs in classes. Instructors were more fluent with the use of emails than with social media, while the opposite was true for students.

"Our analysis showed that teachers think that their students feel more positive about their classroom learning experience if there are more interactive, discussion-oriented activities. In reality, engaging and stimulating lectures, regardless of how technologies are used, are what really predict students' appreciation of a given university course," explains Fusaro.

The researchers hope these results will have a broad impact, especially in terms of curriculum design and professional development. For Venkatesh, "this project represents a true success story of collaboration across Québec universities that could definitely have an effect outside the province." Indeed, the large number of participants involved means this research is applicable to populations of learners across North America and Europe with similar educational and information technology infrastructures. An electronic revolution could soon sweep post-secondary classrooms around the world, thanks to this brand new research from Quebec.


Story Source:

The above story is reprinted from materials provided by Concordia University.


Journal Reference:

Kamran Shaikh, Vivek Venkatesh, Tieja Thomas, Kathryn Urbaniak, Timothy Gallant, David I., Amna Zuberi. Technological Transparency in the Age of Web 2.0: A Case Study of Interactions in Internet-Based Forums. InTech, 2012 DOI: 10.5772/29082




Perfecting email security

ScienceDaily (Sep. 10, 2012) — Millions of us send billions of emails back and forth each day without much concern for their security. On the whole, security is not a primary concern for most day-to-day emails, but some emails do contain personal, proprietary and sensitive information, documents, media, photos, videos and sound files. Unfortunately, the open nature of email means that messages can be intercepted and, if not encrypted, easily read by malicious third parties. Even with the PGP -- pretty good privacy -- encryption scheme first used in 1995, if a sender's private "key" is compromised, all their previous emails encrypted with that key can be exposed.

Writing in the International Journal of Security and Networks, computer scientists Duncan Wong and Xiaojian Tian of City University of Hong Kong, explain how previous researchers had attempted to define perfect email privacy that utilizes PGP by developing a technique that would preclude the decryption of other emails should a private key be compromised. Unfortunately, say Wong and Tian this definition fails if one allows the possibility that the email server itself may be compromised, by hackers or other malicious users.

The team has now defined perfect forward secrecy for email as follows and suggested a technical solution to enable email security to be independent of the server used to send the message: "An e-mail system provides perfect forward secrecy if any third party, including the e-mail server, cannot recover previous session keys between the sender and the recipient even if the long-term secret keys of the sender and the recipient are compromised."

By building a new email protocol on this principle, the team suggests that it is now possible to exchange emails with almost zero risk of interference from third parties. "Our protocol provides both confidentiality and message authentication in addition to perfect forward secrecy," they explain.

The team's protocol involves Alice sending Bob an encrypted email with the hope that Charles will not be able to intercept and decrypt the message. Before the email is encrypted and sent the protocol suggested by Wong and Tian has Alice's computer send an identification code to the email server. The server creates a random session "hash" that is then used to encrypt the actual encryption key for the email Alice is about to send. Meanwhile, Bob as putative recipient receives the key used to create the hash and bounces back an identification tag. This allows Alice and Bob to verify each other's identity.

These preliminary steps all happen automatically, without Alice or Bob needing to do anything in advance. Now, Alice writes her email, encrypts it using PGP and then "hashes" it using the random key from the server. When Bob receives the encrypted message he uses his version of the hash to unlock the container within which the PGP-encrypted email sits. Bob then uses his private PGP key to decrypt the message itself. No snooper on the Internet between Alice and Bob, not even the email server, ever has access to the PGP-encrypted email in the open. Moreover, because a different key is used to lock up the PGP-encrypted email with a second one-time layer, even if the PGP security is compromised, past emails created with the same key cannot be unlocked.
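The forward-secrecy property comes from that second, one-time layer. The sketch below illustrates the idea under stated assumptions: it stands in for the real primitives with an HMAC-SHA256-derived keystream XOR and does not reproduce the actual Wong-Tian key exchange or the PGP layer itself:

```python
# Hypothetical sketch: wrap the PGP ciphertext once more with a
# per-message session key that both sides derive and then discard.
import hmac, hashlib, os

def keystream_xor(key, data):
    """XOR data with an HMAC-SHA256 keystream (symmetric: encrypts and decrypts)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

pgp_ciphertext = b"---BEGIN PGP---...placeholder...---END PGP---"
session_key = os.urandom(32)          # per-message; never stored long-term

wrapped = keystream_xor(session_key, pgp_ciphertext)    # what travels
unwrapped = keystream_xor(session_key, wrapped)         # Bob's side
assert unwrapped == pgp_ciphertext

# Once session_key is deleted, even a compromise of the long-term
# PGP keys cannot recover `wrapped` -- forward secrecy for this layer.
```

The protocol's definition quoted above is exactly this property stated formally: no party, the server included, can recover previous session keys even if the long-term secret keys leak.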


Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.


Journal Reference:

Duncan S. Wong, Xiaojian Tian. E-mail protocols with perfect forward secrecy. International Journal of Security and Networks, 2012; 7 (1): 1 DOI: 10.1504/IJSN.2012.048491




Wednesday, October 24, 2012

Disaster is just a click away: Computer scientist, psychologist look at developing visual system to warn Internet users of safety risks

ScienceDaily (Sep. 11, 2012) — A Kansas State University computer scientist and psychologist are developing improved security warning messages that prompt users to go with their gut when it comes to making a decision online.

Eugene Vasserman, assistant professor of computing and information sciences, and Gary Brase, associate professor of psychology, are researching how to help computer users who have little to no computer experience improve their Web browsing safety without security-specific education. The goal is to keep users from making mistakes that could compromise their online security and to inform them when a security failure has happened.

"Security systems are very difficult to use, and staying safe online is a growing challenge for everyone," Vasserman said. "It is especially devastating to inexperienced computer users, who may not spot risk indicators and may misinterpret currently implemented textual explanations and visual feedback of risk."

Vasserman, whose expertise is in building secure networked systems, and Brase, who studies decision-making and the rationality behind people's choices, are developing a simple visual messaging system that would show novice computer users an easily understandable, relatable warning regarding their security decisions. These could concern a choice to visit a website with an expired security certificate, or a website that is known to contain malware, among other online dangers. The idea is to have users make a gut-reaction decision based on the message.

"The challenge is to get people to make the right decision," Vasserman said. "For example, sometimes a browser will show a dialog box saying this website has an expired SSL certificate, and sometimes the safer behavior is for people to still proceed and accept the expired certificate. But sometimes a website can pose a serious threat. We want people to make good choices without having to understand the technical detail, but we don't want to make the choice for them; we want to show them the importance and danger level of that choice."

Their project, "Education-optional Security Usability on the Internet," was recently awarded nearly $150,000 by the National Science Foundation. The researchers are using the funding to develop, test and evaluate the effectiveness of new and existing educational tools to find which ones cause users to make better online security choices.

This system should minimize the use of traditional text warnings and icons, according to Vasserman.

The messaging system created will also likely be used in a medical project that Vasserman and colleagues are developing. The researchers are designing a secure network for hospitals and doctors' offices so medical devices can communicate with each other to monitor and relay information about a patient's health. A system that shows instantaneously recognizable consequences could help physicians and hospital engineers, who are not familiar with cybersecurity, make a correct decision quickly about what to do with a medical device that has a security problem.

"Presenting bad things with some sort of visual image is tricky because you want to convey to the user that this is not good, but you also don't want to traumatize them," Vasserman said. "For example, some people are terrified of snakes, so that may be too intense an image to use. When this is applied to a medical environment you have to be especially conscious, so there are more considerations."

Prior to collaborating with Brase, Vasserman and Sumeet Gujrati, a doctoral candidate in computing and information sciences, tested the effectiveness of textual and visual communication for security messages and workflows.

Researchers spent more than 90 hours collecting data by observing volunteers use a piece of popular software that encrypts files on a computer.

The on-screen instructions asked users to select a location to store the encrypted files, but users often selected an existing file due to the phrasing of the instructions. This prompted an on-screen warning message stating that the selected file would be erased and all of the information inside of it would be lost. Users then had to decide to continue and erase the file or cancel the process and start over.

"I sat in the room many times and watched as people read the warning message carefully, sometimes even re-reading it, and then watched as they clicked on 'yes' and destroyed the file," Vasserman said. "Because the information being conveyed to them in the message was not immediately clear, many users specifically deleted the file they wanted to protect. I see that as an indicator that a text warning is not effective at getting users to make the correct choice."


Story Source:

The above story is reprinted from materials provided by Kansas State University.




App protects Facebook users from hackers

ScienceDaily (Oct. 8, 2012) — Cyber-crime is expanding to the fertile grounds of social networks and University of California, Riverside engineers are fighting it.

A recent four-month experiment conducted by several UC Riverside engineering professors and graduate students found that the application they created to detect spam and malware posts on Facebook users' walls was highly accurate, fast and efficient.

The researchers also introduced the new term "socware" -- pronounced "sock-where" -- to describe a combination of "social malware," encompassing all criminal and parasitic behavior on online social networks.

Their free application, MyPageKeeper, successfully flagged 97 percent of socware during the experiment. In addition, it produced false positives -- flagging posts as socware when they did not fit those categories -- only 0.005 percent of the time.

The researchers also found that it took an average of 0.0046 seconds to classify a post, far quicker than the 1.9 seconds the traditional approach of web-site crawling takes. MyPageKeeper's more efficient classification also translates to lower costs, cutting expenses by up to 40 times.

"This is really the perfect recipe for socware detection to be viable at scale: high accuracy, fast, and cheap," said Harsha V. Madhyastha, an assistant professor of computer science and engineering at UC Riverside's Bourns College of Engineering.

Madhyastha conducted the research with Michalis Faloutsos, a professor of computer science and engineering, and Md Sazzadur Rahman and Ting-Kai Huang, both Ph.D. students. Rahman presented the paper outlining the findings at the recent USENIX Security Symposium 2012.

During the four-month experiment, which was conducted from June to October 2011, the researchers analyzed more than 40 million posts from 12,000 people who installed MyPageKeeper. They found that 49 percent of users were exposed to at least one socware post during the four months.

"This is really an arms race with hackers," said Faloutsos, who has studied web security for more than 15 years. "In many ways, Facebook has replaced e-mail and web sites. Hackers are following that same path and we need new applications like MyPageKeeper to stop them."

The application, which is already attracting commercial interest, works by continuously scanning the walls and news feeds of subscribed users, identifying socware posts and alerting the users. In the future, the researchers are considering allowing MyPageKeeper to remove malicious posts automatically.

The key novelty of the application is that it factors in the "social context" of the post. Social context includes the words in the post and the number of "likes" and comments it received.

For example, the researchers determined that the presence of words -- such as 'FREE,' 'Hurry,' 'Deal' and 'Shocked' -- provide a strong indication of the post being spam. They found that the use of six of the top 100 keywords is sufficient to detect socware.

The researchers point out that users are unlikely to 'like' or comment on socware posts because they add little value. Hence, fewer likes or comments are also an indicator of socware.

Furthermore, MyPageKeeper checks URLs against lists of domains that have been identified as responsible for spam, phishing or malware. Any URL that matches is classified as socware.
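The combination of signals described above (keyword hits, like/comment counts, and a domain blacklist) can be sketched as a toy classifier. Everything concrete below, including the keyword list, the blacklisted domains, and the thresholds, is an illustrative placeholder, not MyPageKeeper's actual logic:

```python
# Toy sketch (not MyPageKeeper's real code) of flagging a post as
# "socware" from the three signals the article describes: spam keywords,
# low like/comment counts, and a blacklist of known-bad domains.
SPAM_KEYWORDS = {"free", "hurry", "deal", "shocked", "omg"}     # hypothetical list
BLACKLISTED_DOMAINS = {"iphonefree5.com", "nfljerseyfree.com"}  # hypothetical list

def is_socware(text, urls, likes, comments,
               keyword_threshold=2, engagement_threshold=1):
    """Classify a post using social context and URL reputation."""
    # A link to a known-bad domain is an immediate match.
    for url in urls:
        domain = url.split("//")[-1].split("/")[0]
        if domain in BLACKLISTED_DOMAINS:
            return True
    # Otherwise combine keyword hits with low engagement: socware posts
    # tend to use spam keywords and attract few likes or comments.
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    keyword_hits = len(words & SPAM_KEYWORDS)
    low_engagement = (likes + comments) <= engagement_threshold
    return keyword_hits >= keyword_threshold and low_engagement
```

A real deployment would of course learn the keyword weights from labeled data rather than hard-coding them.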

During the four-month experiment, the researchers also found:

- A consistently large number of socware notifications were sent every day, with noticeable spikes on a few days. For example, 4,056 notifications were sent on July 11, 2011, corresponding to a scam that went viral by conning users into completing surveys on the pretext of fake free products.

- Only 54 percent of socware links had been shortened by URL shorteners such as bit.ly and tinyurl.com. The researchers expected this number to be higher, because URL shorteners allow the web site address to be hidden. They also found that many scams use somewhat obviously "fake" domain names, such as http://iphonefree5.com and http://nfljerseyfree.com, yet users still fall for them and click the links.

- Certain words are much more likely to be found in Facebook socware than in e-mail spam. For example, "omg" is 332 times more likely to appear in Facebook socware, while "bank" is 56 times more likely to appear in e-mail spam.

- Twenty percent of socware links are hosted inside Facebook itself.

This activity is so high that the researchers expect that Facebook will have to do more to protect its users against socware.

"Malware on Facebook seems to be hosted and enabled by Facebook itself," Faloutsos said. "It's a classic parasitic kind of behavior. It is fascinating and sad at the same time."

App: https://apps.facebook.com/mypagekeeper/


Story Source:

The above story is reprinted from materials provided by University of California - Riverside. The original article was written by Sean Nealon.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Tuesday, October 23, 2012

Tactile glove provides subtle guidance to objects in vicinity

ScienceDaily (Oct. 10, 2012) — Researchers at HIIT and the Max Planck Institute for Informatics have shown how computer-vision-based hand tracking and vibration feedback on the user's hand can be used to steer the hand toward an object of interest. A study shows an almost three-fold advantage in finding objects in complex visual scenes, such as library or supermarket shelves.

Finding an object from a complex real-world scene is a common yet time-consuming and frustrating chore. What makes this task complex is that humans' pattern recognition capability reduces to a serial one-by-one search when the items resemble each other.

Researchers from the Helsinki Institute for Information Technology HIIT and the Max Planck Institute for Informatics have developed a prototype of a glove that uses vibration feedback on the hand to guide the user's hand towards a predetermined target in 3D space. The glove could help users in daily visual search tasks in supermarkets, parking lots, warehouses, libraries etc.

The main researcher, Ville Lehtinen of HIIT, explains: "The advantage of steering a hand with tactile cues is that the user can easily interpret them in relation to the current field of view where the visual search is operating. This provides a very intuitive experience, like the hand being 'pulled' toward the target."

The solution builds on inexpensive off-the-shelf components such as four vibrotactile actuators on a simple glove and a Microsoft Kinect sensor for tracking the user's hand. The researchers published a dynamic guidance algorithm that calculates effective actuation patterns based on distance and direction to the target.
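The idea of computing an actuation pattern from distance and direction can be sketched as follows. The four-actuator layout and the intensity law below are assumptions made for illustration; the team's published algorithm may differ:

```python
import math

# Simplified guidance sketch: map the hand-to-target offset to
# intensities for four vibrotactile actuators. A 2-D offset is used for
# simplicity; the real system tracks the hand in 3-D with a Kinect.
def actuation_pattern(hand, target, max_dist=0.5):
    """Return intensities in [0, 1] for actuators on the left, right,
    top and bottom of the hand (placement is our assumption)."""
    dx, dy = target[0] - hand[0], target[1] - hand[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:                          # on target: stop vibrating
        return {"left": 0.0, "right": 0.0, "up": 0.0, "down": 0.0}
    gain = min(dist / max_dist, 1.0)         # farther away -> stronger cue
    ux, uy = dx / dist, dy / dist            # unit direction toward target
    return {
        "right": gain * max(ux, 0.0), "left": gain * max(-ux, 0.0),
        "up":    gain * max(uy, 0.0), "down": gain * max(-uy, 0.0),
    }
```

Only the actuator on the side facing the target vibrates, so the pattern reads as a directional "pull" toward the object.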

In a controlled experiment, the complexity of the visual search task was increased by adding distractors to a scene. "In search tasks where there were hundreds of candidates but only one correct target, users wearing the glove were consistently faster, with up to three times faster performance than without the glove," says Dr. Antti Oulasvirta from Max Planck Institute for Informatics.

Dr. Petteri Nurmi from HIIT adds: "This level of improvement in search performance justifies several practical applications. For instance, warehouse workers could have gloves that guide them to target shelves, or a pedestrian could navigate using this glove. With the relatively inexpensive components and the dynamic guidance algorithm, others can easily build their own personal guidance systems."

The research paper will be presented at the 25th ACM Symposium on User Interface Software and Technology (UIST'12) in Boston, MA, USA, on October 7-10, 2012.


Story Source:

The above story is reprinted from materials provided by Helsingin yliopisto (University of Helsinki), via AlphaGalileo.


Monday, October 22, 2012

Negative effects of computerized surveillance at home: Cause of annoyance, concern, anxiety, and even anger

ScienceDaily (Oct. 8, 2012) — To understand the effects of continuous computerized surveillance on individuals, a Finnish research group instrumented ten Finnish households with video cameras, microphones, and logging software for personal computers, wireless networks, smartphones, TVs, and DVDs. The twelve participants filled in monthly questionnaires to report on stress levels and were interviewed at six and twelve months. The study was carried out by the Helsinki Institute for Information Technology HIIT, a joint research institute of Aalto University and the University of Helsinki, Finland.

The results expose a range of negative changes in experience and behavior. For all except one participant, the surveillance system proved to be a cause of annoyance, concern, anxiety, and even anger. However, surveillance did not cause mental health issues comparable in severity to depression or alcoholism, when measured with a standardized scale. Nevertheless, one household dropped out of the study at six months, citing that the breach of privacy and anonymity had grown unbearable.

The surveillees' privacy concerns plateaued after about three months, as the surveillees got more used to surveillance. The researchers attribute this to behavioral regulation of privacy: almost all subjects changed their behavior to control what the system perceived. Some hid their activities in the home from the sensors, while others moved them to places outside the home. Dr. Antti Oulasvirta explains: "Although almost all were capable of adapting their daily practices to maintain privacy intrusion at a level they could tolerate, the required changes made the home fragile. Any unpredicted social event would bring the new practices to the fore and call them into question, and at times prevent them from taking place."

The researchers were surprised that computer logging was as disturbing as camera-based surveillance. On the one hand, logging of the computer was experienced negatively because it breaches the anonymity of conversations. "The importance of anonymity in computer use is symptomatic of the fact that a large proportion of our social activities today are mediated by computers," Oulasvirta says.

On the other hand, the ever-observing "eye" of the video camera deprived the participants of the solitude and isolation they expect at home. The surveillees felt particularly strongly the violation of reserve and intimacy caused by the capture of nudity, physical appearance, and sex. "Psychological theories of privacy have postulated six privacy functions of the home, and we find that computerized surveillance can disturb all of them," Oulasvirta concludes.

More experimental research is needed to reveal the effects of computerized surveillance. Prof. Petri Myllymäki explains: "Because the topic is challenging to study empirically, there is hardly any published research on the effects of intrusive surveillance on everyday life. In the Helsinki Privacy Experiment project, we did rigorous ethical and legal preparations and invested in a robust technical platform in order to allow a longitudinal field experiment on privacy. The present sample of subjects is potentially biased, as it was selected from people who volunteered in response to an Internet advertisement. While we realize the limits of our sample, our work can facilitate further inquiries into this important subject."

The first results were presented at the 14th International Conference on Ubiquitous Computing (Ubicomp 2012) in Pittsburgh, PA, USA.


Story Source:

The above story is reprinted from materials provided by Aalto University.


Sunday, October 21, 2012

Artificially intelligent game bots pass the Turing test on Turing's centenary

ScienceDaily (Sep. 26, 2012) — An artificially intelligent virtual gamer created by computer scientists at The University of Texas at Austin has won the BotPrize by convincing a panel of judges that it was more human-like than half the humans it competed against.

The competition was sponsored by 2K Games and was set inside the virtual world of "Unreal Tournament 2004," a first-person shooter video game. The winners were announced this month at the IEEE Conference on Computational Intelligence and Games.

"The idea is to evaluate how we can make game bots, which are nonplayer characters (NPCs) controlled by AI algorithms, appear as human as possible," said Risto Miikkulainen, professor of computer science in the College of Natural Sciences. Miikkulainen created the bot, called the UT^2 game bot, with doctoral students Jacob Schrum and Igor Karpov.

The bots face off in a tournament against one another and about an equal number of humans, with each player trying to score points by eliminating its opponents. Each player also has a "judging gun" in addition to its usual complement of weapons. That gun is used to tag opponents as human or bot.

The bot that is scored as most human-like by the human judges is named the winner. UT^2, which won a warm-up competition last month, shared the honors with MirrorBot, which was programmed by Romanian computer scientist Mihai Polceanu.

The winning bots both achieved a humanness rating of 52 percent. Human players received an average humanness rating of only 40 percent. The two winning teams will split the $7,000 first prize.

The victory comes 100 years after the birth of mathematician and computer scientist Alan Turing, whose "Turing test" stands as one of the foundational definitions of what constitutes true machine intelligence. Turing argued that we will never be able to see inside a machine's hypothetical consciousness, so the best measure of machine sentience is whether it can fool us into believing it is human.

"When this 'Turing test for game bots' competition was started, the goal was 50 percent humanness," said Miikkulainen. "It took us five years to get there, but that level was finally reached last week, and it's not a fluke."

The complex gameplay and 3-D environments of "Unreal Tournament 2004" require that bots mimic humans in a number of ways, including moving around in 3-D space, engaging in chaotic combat against multiple opponents and reasoning about the best strategy at any given point in the game. Even displays of distinctively human irrational behavior can, in some cases, be emulated.

"People tend to tenaciously pursue specific opponents without regard for optimality," said Schrum. "When humans have a grudge, they'll chase after an enemy even when it's not in their interests. We can mimic that behavior."

To mimic as much of the range of human behavior as convincingly as possible, the team takes a two-pronged approach. Some behavior is modeled directly on previously observed human behavior, while the central battle behaviors are developed through a process called neuroevolution, which runs artificially intelligent neural networks through a survival-of-the-fittest gauntlet modeled on the biological process of evolution.

Networks that thrive in a given environment are kept, and the less fit are thrown away. The holes in the population are filled by copies of the fit ones and by their "offspring," which are created by randomly modifying (mutating) the survivors. The simulation is run for as many generations as are necessary for networks to emerge that have evolved the desired behavior.
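The generational loop described above can be sketched in a few lines. Here a "network" is reduced to a bare weight vector, and the toy fitness function stands in for the BotPrize gameplay evaluation, which this sketch does not attempt to model:

```python
import random

# Minimal neuroevolution-style loop: keep the fittest genomes, refill
# the population with mutated copies ("offspring"), repeat.
def evolve(fitness, genome_len=8, pop_size=20, generations=50,
           mutation_sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by fitness
        survivors = pop[: pop_size // 2]             # the less fit are thrown away
        children = [[w + rng.gauss(0, mutation_sigma) for w in parent]
                    for parent in survivors]         # randomly modified copies
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective standing in for gameplay evaluation: evolve all weights
# toward 1.0 (fitness is the negative squared error).
target_fitness = lambda g: -sum((w - 1.0) ** 2 for w in g)
best = evolve(target_fitness)
```

Constraining the fitness function, as the researchers describe for aiming accuracy, is what keeps the evolved behavior human-like rather than merely optimal.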

"In the case of the BotPrize," said Schrum, "a great deal of the challenge is in defining what 'human-like' is, and then setting constraints upon the neural networks so that they evolve toward that behavior.

"If we just set the goal as eliminating one's enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot's aim, such that rapid movements and long distances decrease accuracy. By evolving for good performance under such behavioral constraints, the bot's skill is optimized within human limitations, resulting in behavior that is good but still human-like."

Miikkulainen said that methods developed for the BotPrize competition should eventually be useful not just in developing games that are more entertaining, but also in creating virtual training environments that are more realistic, and even in building robots that interact with humans in more pleasant and effective ways.


Story Source:

The above story is reprinted from materials provided by University of Texas at Austin, via EurekAlert!, a service of AAAS.


Saturday, October 20, 2012

Popularity versus similarity: A balance that predicts network growth

ScienceDaily (Sep. 13, 2012) — Do you know who Michael Jackson or George Washington was? You most likely do: they are what we call "household names" because these individuals were so ubiquitous. But what about Giuseppe Tartini or John Bachar?

That's much less likely, unless you are a fan of Italian baroque music or free solo climbing.

In that case, you would be just as likely to have heard of Bachar as of Washington. The latter was popular, while the former was less popular but had interests similar to yours.

A new paper published this week in the science journal Nature by the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, explores the concept of popularity versus similarity, and whether one more than the other fuels the growth of a variety of networks, be it the Internet, a social network of trust between people, or a biological network.

The researchers, in a study called "Popularity Versus Similarity in Growing Networks," show for the first time how networks evolve by optimizing a trade-off between popularity and similarity. They found that while popularity attracts new connections, similarity is just as attractive.

"Popular nodes in a network, or those that are more connected than others, tend to attract more new connections in growing networks," said Dmitri Krioukov, co-author of the Nature paper and a research scientist with SDSC's CAIDA group, which studies the practical and theoretical aspects of the Internet and other large networks. "But similarity between nodes is just as important because it is instrumental in determining precisely how these networks grow. Accounting for these similarities can help us better predict the creation of new links in evolving networks."

In the paper, Krioukov and his colleagues, who include network analysis experts from academic institutions in Cyprus and Spain, describe a new model that significantly increases the accuracy of network evolution prediction by considering the trade-off between popularity and similarity. Their model describes the large-scale evolution of three kinds of networks: technological (the Internet), social (a network of trust relationships between people), and biological (the metabolic network of Escherichia coli, typically found harmlessly in the human gastrointestinal tract, though some strains can cause diarrheal diseases).

The researchers write that the model's ability to predict links in networks may find applications ranging from predicting protein interactions or terrorist connections to improving recommender and collaborative filtering systems, such as Netflix or Amazon product recommendations.

"On a more general note, if we know the laws describing the dynamics of a complex system, then we not only can predict its behavior, but we may also find ways to better control it," added Krioukov.

In establishing connections in networks, nodes optimize certain trade-offs between the two dimensions of popularity and similarity, according to the researchers. "These two dimensions can be combined or mapped into a single space, and this mapping allows us to predict the probability of connections in networks with a remarkable accuracy," said Krioukov. "Not only can we capture all the structural properties of three very different networks, but also their large-scale growth dynamics. In short, these networks evolve almost exactly as our model predicts."

Many factors contribute to the probability of connections between nodes in real networks. In the Internet, for example, this probability depends on geographic, economic, political, technological, and many other factors, many of which are un-measurable or even unknown.

"The beauty of the new model is that it accounts for all of these factors, and projects them, properly weighted, into a single metric, while allowing us to predict the probability of new links with a high degree of precision," according to Krioukov.

The other researchers who worked on this project are Fragkiskos Papadopoulos, Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology in Cyprus; Maksim Kitsak, CAIDA/SDSC/UC San Diego; and M. Ángeles Serrano and Marián Boguñá, Departament de Física Fonamental, Universitat de Barcelona, in Spain.

This research was supported by a variety of grants, including National Science Foundation (NSF) grants CNS-0964236, CNS-1039646, and CNS-0722070; Department of Homeland Security (DHS) grant N66001-08-C-2029; Defense Advanced Research Projects Agency (DARPA) grant HR0011-12-1-0012; and support from Cisco Systems.

International support was provided by a Marie Curie International Reintegration Grant within the 7th European Community Framework Programme; Office of the Ministry of Economy and Competitiveness, Spain (MICINN) projects FIS2010-21781-C02-02 and BFU2010-21847-C02-02; Generalitat de Catalunya grant 2009SGR838; the Ramón y Cajal program of the Spanish Ministry of Science; and the Catalan Institution for Research and Advanced Studies (ICREA) Academia prize 2010, funded by the Generalitat de Catalunya, Spain.


Story Source:

The above story is reprinted from materials provided by University of California, San Diego. The original article was written by Jan Zverina.


Journal Reference:

Fragkiskos Papadopoulos, Maksim Kitsak, M. Ángeles Serrano, Marián Boguñá, Dmitri Krioukov. Popularity versus similarity in growing networks. Nature, 2012; DOI: 10.1038/nature11459


Thursday, October 18, 2012

Training computers to understand the human brain

ScienceDaily (Oct. 5, 2012) — Tokyo Institute of Technology researchers use fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.

The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently 'label' each pictured object with certain properties, whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool).

After 'training' the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it using orthographic, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
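The train-on-one-session, test-on-another logic can be illustrated with simulated data and a nearest-centroid classifier standing in for the study's actual MVPA pipeline; the "voxel" prototypes and noise level below are invented for illustration:

```python
import random

# Toy stand-in for the MVPA setup: voxel patterns are simulated as noisy
# copies of a per-category prototype, and a nearest-centroid classifier
# plays the role of the decoding algorithm.
def make_scans(prototype, n, noise, rng):
    return [[v + rng.gauss(0, noise) for v in prototype] for _ in range(n)]

def centroid(scans):
    return [sum(col) / len(scans) for col in zip(*scans)]

def classify(scan, centroids):
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(scan, centroids[label]))
    return min(centroids, key=dist)

rng = random.Random(1)
animal, tool = [1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]  # fake "voxel" prototypes
train = {"animal": make_scans(animal, 20, 0.3, rng),        # "auditory" session
         "tool":   make_scans(tool, 20, 0.3, rng)}
centroids = {label: centroid(scans) for label, scans in train.items()}
test = [("animal", s) for s in make_scans(animal, 10, 0.3, rng)] + \
       [("tool", s) for s in make_scans(tool, 10, 0.3, rng)]    # held-out session
accuracy = sum(classify(s, centroids) == label for label, s in test) / len(test)
```

The cross-modal drop the study reports corresponds to the train and test patterns being drawn from systematically different distributions, which this simulation does not model.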


Story Source:

The above story is reprinted from materials provided by Tokyo Institute of Technology, via ResearchSEA.


Journal Reference:

Hiroyuki Akama, Brian Murphy, Li Na, Yumiko Shimizu, Massimo Poesio. Decoding semantics across fMRI sessions with different stimulus modalities: a practical MVPA study. Frontiers in Neuroinformatics, 2012; 6 DOI: 10.3389/fninf.2012.00024


Computer, read my lips: Emotion detector developed using a genetic algorithm

ScienceDaily (Sep. 10, 2012) — A computer is being taught to interpret human emotions based on lip pattern, according to research published in the International Journal of Artificial Intelligence and Soft Computing. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communications devices, such as voice synthesizers, more effectively and more efficiently.

Karthigayan Muthukaruppan of Manipal International University in Selangor, Malaysia, and co-workers have developed a system using a genetic algorithm that improves with each iteration to match irregular ellipse-fitting equations to the shape of the human mouth displaying different emotions. They used photos of individuals from South-East Asia and Japan to train a computer to recognize the six commonly accepted human emotions -- happiness, sadness, fear, anger, disgust, surprise -- and a neutral expression. The upper and lower lips are analyzed as two separate ellipses by the algorithm.

"In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers especially in the area of human emotion recognition by observing facial expression," the team explains. Earlier researchers have developed an understanding that allows emotion to be recreated by manipulating a representation of the human face on a computer screen. Such research is currently informing the development of more realistic animated actors and even the behavior of robots. However, the inverse process in which a computer recognizes the emotion behind a real human face is still a difficult problem to tackle.

It is well known that many deeper emotions are betrayed by more than movements of the mouth. A genuine smile, for instance, involves flexing of muscles around the eyes, and eyebrow movements are almost universally essential to the subconscious interpretation of a person's feelings. However, the lips remain a crucial part of the outward expression of emotion. The team's algorithm can successfully classify the six emotions and the neutral expression described.
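The ellipse-fitting step can be sketched with a small genetic algorithm. The population size, mutation scale, and sample points below are illustrative choices, not the published system's parameters, and the final emotion-labeling step is omitted:

```python
import math
import random

# Illustrative sketch (not the published system): a genetic algorithm
# fits the semi-axes (a, b) of an ellipse x^2/a^2 + y^2/b^2 = 1 to
# sampled lip-contour points; one such fit would be done per lip.
def fit_ellipse(points, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)

    def error(genome):
        a, b = genome
        return sum(abs((x / a) ** 2 + (y / b) ** 2 - 1.0) for x, y in points)

    pop = [(rng.uniform(0.5, 5.0), rng.uniform(0.5, 5.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                        # best fits first
        survivors = pop[: pop_size // 2]
        pop = survivors + [(max(0.1, a + rng.gauss(0, 0.1)),
                            max(0.1, b + rng.gauss(0, 0.1)))
                           for a, b in survivors]  # mutated offspring
    return min(pop, key=error)

# Points sampled from an ellipse with a=3 (mouth width), b=1 (opening).
points = [(3 * math.cos(t), math.sin(t))
          for t in [i * math.pi / 8 for i in range(16)]]
a, b = fit_ellipse(points)
```

The fitted (a, b) pairs for the two lips would then serve as the features from which an emotion label is read off.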

The researchers suggest that initial applications of such an emotion detector might be helping disabled patients lacking speech to interact more effectively with computer-based communication devices, for instance.


Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.


Journal Reference:

M. Karthigayan, R. Nagarajan, M. Rizon, Sazali Yaacob. Lip pattern in the interpretation of human emotions. International Journal of Artificial Intelligence and Soft Computing, 2012; 3 (2): 95 DOI: 10.1504/IJAISC.2012.049004


An operating system in the cloud: TransOS could displace conventional desktop operating systems

ScienceDaily (Oct. 9, 2012) — A new cloud-based operating system for all kinds of computers is being developed by researchers in China. Details of the TransOS system are reported in a forthcoming special issue of the International Journal of Cloud Computing.

Computer users are familiar to different degrees with the operating system that gets their machines up and running, whether that is Microsoft Windows, Apple's Mac OS, Linux, ChromeOS or another operating system. The OS handles the links between hardware (the CPU, memory, hard drive, peripherals such as printers and cameras, and the components that connect the computer to the Internet). Critically, it also allows the user to run the various pieces of software and applications they need, such as e-mail programs, web browsers, word processors, spreadsheets and games.

Operating systems seem firmly entrenched in the personal computer, and users' files, documents, movies, sounds and images sit deep within the hard drive. Traditionally, software too is stored on the same hard drive for quick access to the programs a user needs at any given time. However, there is a growing movement to take applications off the personal hard drive and put them "in the cloud."

The user connects to the Internet and "runs" the software as and when needed from a cloud server, perhaps even storing their files in the cloud too. This has numerous advantages for the user. First, the software can be kept up to date automatically without their intervention. Second, the software is independent of the hardware and operating system and so can be run from almost any computer with an Internet connection. Third, if the user's files are also in the cloud, they can be accessed and used anywhere in the world with a network connection, at any time.

The obvious next step is to make the entire process transparent by stripping the operating system from the computer and putting that in the cloud. The computer then becomes a sophisticated but dumb terminal, and its configuration and capabilities become irrelevant to how the user interacts with their files. Already most types of software are represented in the cloud by alternative or additional versions of their desktop equivalents, but we have yet to see a fully functional cloud-based OS. For instance, systems such as Java were developed to allow applications to run in a web browser irrespective of the computer or operating system on which that browser was running.

Now, Yaoxue Zhang and Yuezhi Zhou of Tsinghua University, in Beijing, China, have at last developed an operating system for the cloud -- TransOS. The operating system code is stored on a cloud server and allows a connection from a bare terminal computer. The terminal has a minimal amount of code that boots it up and connects it to the Internet dynamically. TransOS then downloads specific pieces of code that offer the user options as if they were running a conventional operating system via a graphical user interface. Applications are then run, calling on the TransOS code only as needed so that memory is not hogged by inactive operating system code as it is by a conventional desktop computer operating system.

"TransOS manages all the resources to provide integrated services for users, including traditional operating systems," the team says. "The TransOS manages all the networked and virtualized hardware and software resources, including traditional OS, physical and virtualized underlying hardware resources, and enables users can select and run any service on demand," the team says

The researchers suggest that TransOS need not be limited to personal computers: it could also be enabled on other domestic equipment (refrigerators and washing machines, for instance) and factory equipment. The concept should also work well with mobile devices such as phones and tablet PCs. It is essential, the team adds, that a cloud operating system architecture and relevant interface standards now be established so that TransOS can be developed for a vast range of applications.
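The on-demand loading that TransOS describes can be caricatured in a few lines, with a dict standing in for the cloud-side module store; the module names and contents are invented for illustration:

```python
# Conceptual sketch of TransOS-style on-demand loading: the "terminal"
# holds no OS code, only a tiny loader that fetches modules from the
# cloud when first needed and evicts them when idle, so memory is not
# hogged by inactive operating system code. A real system would stream
# code over the network; here the store is just a dict.
CLOUD_STORE = {                       # simulated cloud-side OS modules
    "fs":  "filesystem service code",
    "net": "network stack code",
    "ui":  "window manager code",
}

class TransOSTerminal:
    def __init__(self, store):
        self.store = store
        self.cache = {}               # modules currently resident

    def require(self, name):
        """Fetch a module from the cloud only on first use."""
        if name not in self.cache:
            self.cache[name] = self.store[name]   # simulated download
        return self.cache[name]

    def release(self, name):
        """Evict an idle module so it stops occupying terminal memory."""
        self.cache.pop(name, None)

terminal = TransOSTerminal(CLOUD_STORE)
terminal.require("fs")                # loaded on demand
terminal.release("fs")                # memory reclaimed when idle
```

The same fetch-on-first-use pattern is what lets the bare terminal boot with only a minimal connecting stub.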


Story Source:

The above story is reprinted from materials provided by Inderscience Publishers, via EurekAlert!, a service of AAAS.


Journal Reference:

Yaoxue Zhang, Yuezhi Zhou. TransOS: a transparent computing-based operating system for the cloud. International Journal of Cloud Computing, 2012; 1: 287-301
