
Monday, October 15, 2012

Robots using tools: Researchers aim to create 'MacGyver' robot

ScienceDaily (Oct. 9, 2012) — Robots are increasingly being used in place of humans to explore hazardous and difficult-to-access environments, but they aren't yet able to interact with their environments as well as humans can. If today's most sophisticated robot were trapped in a burning room by a jammed door, it would probably not know how to locate and use objects in the room to climb over any debris, pry open the door, and escape the building.

A research team led by Professor Mike Stilman at the Georgia Institute of Technology hopes to change that by giving robots the ability to use objects in their environments to accomplish high-level tasks. The team recently received a three-year, $900,000 grant from the Office of Naval Research to work on this project.

"Our goal is to develop a robot that behaves like MacGyver, the television character from the 1980s who solved complex problems and escaped dangerous situations by using everyday objects and materials he found at hand," said Stilman, an assistant professor in the School of Interactive Computing at Georgia Tech. "We want to understand the basic cognitive processes that allow humans to take advantage of arbitrary objects in their environments as tools. We will achieve this by designing algorithms for robots that make tasks that are impossible for a robot alone possible for a robot with tools."

The research will build on Stilman's previous work on navigation among movable obstacles that enabled robots to autonomously recognize and move obstacles that were in the way of their getting from point A to point B.

"This project is challenging because there is a critical difference between moving objects out of the way and using objects to make a way," explained Stilman. "Researchers in the robot motion planning field have traditionally used computerized vision systems to locate objects in a cluttered environment to plan collision-free paths, but these systems have not provided any information about the objects' functions."

To create a robot capable of using objects in its environment to accomplish a task, Stilman plans to develop an algorithm that will allow a robot to identify an arbitrary object in a room, determine the object's potential function, and turn that object into a simple machine that can be used to complete an action. Actions could include using a chair to reach something high, bracing a ladder against a bookshelf, stacking boxes to climb over something, and building levers or bridges from random debris.

With basic knowledge of rigid body mechanics and simple machines, the robot should be able to autonomously determine the mechanical force properties of an object and construct motion plans for using that object to perform high-level tasks.
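
As a rough illustration of the kind of simple-machine reasoning this implies (a minimal sketch only, not the team's algorithm; every object property and number below is invented for the example), a robot could model a found rod as a class-one lever and check whether its own push, amplified by the lever, would be enough to force a jammed door:

# Minimal sketch: treat a found rigid object as a class-one lever and check
# whether it can amplify the robot's available push enough to force a jammed
# door. All names, properties, and numbers are invented for illustration.

from dataclasses import dataclass


@dataclass
class FoundObject:
    name: str
    length_m: float      # usable length of the object
    max_load_n: float    # load the object can bear before it bends or breaks


def lever_output_force(push_n: float, obj: FoundObject, fulcrum_offset_m: float) -> float:
    """Force delivered at the short arm when the robot pushes on the long arm.

    fulcrum_offset_m is the distance from the load end to the fulcrum;
    ideal mechanical advantage = long arm / short arm (friction ignored).
    """
    long_arm = obj.length_m - fulcrum_offset_m
    short_arm = fulcrum_offset_m
    if long_arm <= 0 or short_arm <= 0:
        return 0.0
    return push_n * (long_arm / short_arm)


def can_pry_open(door_force_n: float, push_n: float,
                 obj: FoundObject, fulcrum_offset_m: float) -> bool:
    out = lever_output_force(push_n, obj, fulcrum_offset_m)
    # The object must deliver enough force and also survive that load itself.
    return door_force_n <= out <= obj.max_load_n


# Example: a 1.2 m pipe, fulcrum 10 cm from the door edge, robot can push 150 N.
pipe = FoundObject("pipe", length_m=1.2, max_load_n=2500.0)
print(can_pry_open(door_force_n=1200.0, push_n=150.0, obj=pipe, fulcrum_offset_m=0.1))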

For example, exiting a burning room with a jammed door would require a robot to travel around any fire, use an object in the room to apply sufficient force to open the stuck door, and locate an object that will support its weight as it makes its way out.

Such skills could be extremely valuable in the future as robots work side-by-side with military personnel to accomplish challenging missions.

"The Navy prides itself on recruiting, training and deploying our country's most resourceful and intelligent men and women," said Paul Bello, director of the cognitive science program in the Office of Naval Research (ONR). "Now that robotic systems are becoming more pervasive as teammates for warfighters in military operations, we must ensure that they are both intelligent and resourceful. Professor Stilman's work on the 'MacGyver-bot' is the first of its kind, and is already beginning to deliver on the promise of mechanical teammates able to creatively perform in high-stakes situations."

To address the complexity of the human-like reasoning required for this type of scenario, Stilman is collaborating with researchers Pat Langley and Dongkyu Choi. Langley is the director of the Institute for the Study of Learning and Expertise (ISLE), and is recognized as a co-founder of the field of machine learning, where he championed both experimental studies of learning algorithms and their application to real-world problems. Choi is an assistant professor in the Department of Aerospace Engineering at the University of Kansas.

Langley and Choi will expand the cognitive architecture they developed, called ICARUS, which provides an infrastructure for modeling various human capabilities like perception, inference, performance and learning in robots.

"We believe a hybrid reasoning system that embeds our physics-based algorithms within a cognitive architecture will create a more general, efficient and structured control system for our robot that will accrue more benefits than if we used one approach alone," said Stilman.

After the researchers develop and optimize the hybrid reasoning system using computer simulations, they plan to test the software using Golem Krang, a humanoid robot designed and built in Stilman's laboratory to study whole-body robotic planning and control.
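
As a very loose sketch of how a hybrid reasoning system like the one described above could be wired together in software (all names, proposals, and thresholds below are invented, and ICARUS itself is not modeled), a symbolic layer might propose candidate tool uses for a goal while a physics-based layer accepts or rejects them:

# Loose sketch of a hybrid control loop: a symbolic layer proposes candidate
# tool uses for a goal; a physics-based check filters them for feasibility.
# Everything here is invented for illustration and does not model ICARUS.

from typing import Callable, Optional


def hybrid_plan(goal: str,
                propose: Callable[[str], list],
                feasible: Callable[[dict], bool]) -> Optional[dict]:
    """Return the first proposed tool use that the physics check accepts."""
    for candidate in propose(goal):
        if feasible(candidate):
            return candidate
    return None


# Toy inputs: symbolic proposals for opening a jammed door, filtered on force.
def proposals(goal: str) -> list:
    return [
        {"tool": "chair", "use": "lever", "deliverable_force_n": 400.0},
        {"tool": "pipe", "use": "lever", "deliverable_force_n": 1650.0},
    ]


def physics_ok(candidate: dict) -> bool:
    return candidate["deliverable_force_n"] >= 1200.0  # invented door threshold


print(hybrid_plan("open jammed door", proposals, physics_ok))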

This research is sponsored by the Department of the Navy, Office of Naval Research, through grant number N00014-12-1-0143. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research.

Story Source:

The above story is reprinted from materials provided by Georgia Institute of Technology.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



Thursday, June 14, 2012

Self-sculpting sand: Heaps of 'smart sand' could assume any shape, form new tools or duplicate broken parts

ScienceDaily (Apr. 2, 2012) — Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: The sand has assembled itself into a large-scale replica of the model.

That may sound like a scene from a Harry Potter novel, but it's the vision animating a research project at the Distributed Robotics Laboratory (DRL) at MIT's Computer Science and Artificial Intelligence Laboratory. At the IEEE International Conference on Robotics and Automation in May, DRL researchers will present a paper describing algorithms that could enable such "smart sand." They also describe experiments in which they tested the algorithms on somewhat larger particles -- cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides.

Unlike many other approaches to reconfigurable robots, smart sand uses a subtractive method, akin to stone carving, rather than an additive method, akin to snapping LEGO blocks together. A heap of smart sand would be analogous to the rough block of stone that a sculptor begins with. The individual grains would pass messages back and forth and selectively attach to each other to form a three-dimensional object; the grains not necessary to build that object would simply fall away. When the object had served its purpose, it would be returned to the heap. Its constituent grains would detach from each other, becoming free to participate in the formation of a new shape.

Distributed intelligence

Algorithmically, the main challenge in developing smart sand is that the individual grains would have very few computational resources. "How do you develop efficient algorithms that do not waste any information at the level of communication and at the level of storage?" asks Daniela Rus, a professor of computer science and engineering at MIT and a co-author on the new paper, together with her student Kyle Gilpin. If every grain could simply store a digital map of the object to be assembled, "then I can come up with an algorithm in a very easy way," Rus says. "But we would like to solve the problem without that requirement, because that requirement is simply unrealistic when you're talking about modules at this scale." Furthermore, Rus says, from one run to the next, the grains in the heap will be jumbled together in a completely different way. "We'd like to not have to know ahead of time what our block looks like," Rus says.

Conveying shape information to the heap with a simple physical model -- such as the tiny footstool -- helps address both of these problems. To get a sense of how the researchers' algorithm works, it's probably easiest to consider the two-dimensional case. Picture each grain of sand as a square in a two-dimensional grid. Now imagine that some of the squares -- say, in the shape of a footstool -- are missing. That's where the physical model is embedded.

According to Gilpin, a co-author of the new paper, the grains first pass messages to each other to determine which have missing neighbors. (In the grid model, each square could have eight neighbors.) Grains with missing neighbors are in one of two places: the perimeter of the heap or the perimeter of the embedded shape.
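
A centralized stand-in for that step might look like the sketch below, which flags every grain that is missing at least one of its eight neighbors; in the real system this is done through local message passing rather than a global set, and the names here are invented for illustration:

# Sketch of the 2-D grid model: a grain is a boundary grain if any of its
# eight neighboring cells is unoccupied. Such grains lie either on the outer
# perimeter of the heap or around the embedded model shape. This is a
# centralized stand-in for the distributed message passing described above.

NEIGHBORS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]


def boundary_grains(grains: set) -> set:
    """Grains with at least one missing 8-neighbor."""
    return {(x, y) for (x, y) in grains
            if any((x + dx, y + dy) not in grains for dx, dy in NEIGHBORS)}


# A 6x6 heap with a 2x2 model shape "buried" in it (those cells are missing).
heap = {(x, y) for x in range(6) for y in range(6)} - {(2, 2), (2, 3), (3, 2), (3, 3)}
print(sorted(boundary_grains(heap)))  # outer edge plus the ring around the hole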

Once the grains surrounding the embedded shape identify themselves, they simply pass messages to other grains a fixed distance away, which in turn identify themselves as defining the perimeter of the duplicate. If the duplicate is supposed to be 10 times the size of the original, each square surrounding the embedded shape will map to 10 squares of the duplicate's perimeter. Once the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors.
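
Continuing that sketch (again centralized rather than message-passing, with invented names), scaling the embedded model by an integer factor and taking the perimeter of the result gives a duplicate outline whose length grows roughly in proportion to the factor, as in the ten-times example above:

# Sketch of scaling the embedded model: each model cell maps to a k-by-k block
# of duplicate cells, so the duplicate's perimeter grows roughly k times as
# long as the model's. Grains outside the duplicate would then disconnect.

NEIGHBORS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]


def scale_shape(model: set, k: int) -> set:
    """Blow the model shape up by an integer factor k."""
    return {(x * k + i, y * k + j)
            for (x, y) in model for i in range(k) for j in range(k)}


def shape_perimeter(shape: set) -> set:
    """Cells of the shape with at least one 8-neighbor outside the shape."""
    return {(x, y) for (x, y) in shape
            if any((x + dx, y + dy) not in shape for dx, dy in NEIGHBORS)}


# The 2x2 model from the previous sketch, copied at ten times the size.
model = {(2, 2), (2, 3), (3, 2), (3, 3)}
duplicate = scale_shape(model, 10)
print(len(duplicate), len(shape_perimeter(duplicate)))  # 400 cells, 76 on the perimeter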

Rapid prototyping

The same algorithm can be varied to produce multiple, similarly sized copies of a sample shape, or to produce a single, large copy of a large object. "Say the tire rod in your car has sheared," Gilpin says. "You could duct tape it back together, put it into your system and get a new one."

The cubes -- or "smart pebbles" -- that Gilpin and Rus built to test their algorithm enact the simplified, two-dimensional version of the system. Four faces of each cube are studded with so-called electropermanent magnets, materials that can be magnetized or demagnetized with a single electric pulse. Unlike permanent magnets, they can be turned on and off; unlike electromagnets, they don't require a constant current to maintain their magnetism. The pebbles use the magnets not only to connect to each other but also to communicate and to share power. Each pebble also has a tiny microprocessor, which can store just 32 kilobytes of program code and has only two kilobytes of working memory.
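
To make the on/off behavior concrete, here is a minimal sketch of one electropermanent-magnet face modeled as a latched state that only costs energy when it switches (the class and fields are invented; the real pebbles also route communication and power through the same faces):

# Minimal sketch of an electropermanent-magnet face: one electric pulse flips
# it on or off, and no current is needed to hold either state. Names and
# structure are invented for illustration.

from dataclasses import dataclass


@dataclass
class EPMagnetFace:
    latched_on: bool = False
    pulses_used: int = 0  # energy is spent only when the state changes

    def pulse(self, turn_on: bool) -> None:
        if self.latched_on != turn_on:
            self.pulses_used += 1  # a single pulse magnetizes or demagnetizes
        self.latched_on = turn_on  # the state then holds with zero current


face = EPMagnetFace()
face.pulse(True)    # bond to a neighboring pebble
face.pulse(True)    # already latched on: no additional pulse needed
face.pulse(False)   # release the neighbor
print(face.latched_on, face.pulses_used)  # False 2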

The pebbles have magnets on only four faces, Gilpin explains, because, with the addition of the microprocessor and circuitry to regulate power, "there just wasn't room for two more magnets." But Gilpin and Rus performed computer simulations showing that their algorithm would work with a three-dimensional block of cubes, too, by treating each layer of the block as its own two-dimensional grid. The cubes discarded from the final shape would simply disconnect from the cubes above and below them as well as those next to them.
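
The layered extension can be sketched the same way: run the 2-D boundary test on each horizontal slice of the block independently (coordinates and helper names below are again invented for illustration):

# Sketch of the layered 3-D extension: each horizontal slice of the block is
# treated as its own 2-D grid, and the 2-D boundary test is run per slice.

NEIGHBORS_2D = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]


def layer(cubes: set, z: int) -> set:
    """The 2-D grid formed by one horizontal slice of a set of (x, y, z) cubes."""
    return {(x, y) for (x, y, zz) in cubes if zz == z}


def boundary_2d(cells: set) -> set:
    return {(x, y) for (x, y) in cells
            if any((x + dx, y + dy) not in cells for dx, dy in NEIGHBORS_2D)}


# A 4x4x3 block: each z slice is processed independently as its own grid.
block = {(x, y, z) for x in range(4) for y in range(4) for z in range(3)}
for z in range(3):
    print(z, len(boundary_2d(layer(block, z))))  # 12 boundary cells per slice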

True smart sand, of course, would require grains much smaller than 10-millimeter cubes. But according to Robert Wood, an associate professor of electrical engineering at Harvard University, that's not an insurmountable obstacle. "Take the core functionalities of their pebbles," says Wood, who directs Harvard's Microrobotics Laboratory. "They have the ability to latch onto their neighbors; they have the ability to talk to their neighbors; they have the ability to do some computation. Those are all things that are certainly feasible to think about doing in smaller packages."

"It would take quite a lot of engineering to do that, of course," Wood cautions. "That's a well-posed but very difficult set of engineering challenges that they could continue to address in the future."

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology. The original article was written by Larry Hardesty, MIT News Office.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.

