


ARMAR-III

Continuing its work on the humanoid helper robot ARMAR, the Collaborative Research Center 588: Humanoid Robots at the University of Karlsruhe began planning ARMAR-IIIa (blue) in 2006. The robot has 43 degrees of freedom (torso x3, 2 arms x7, 2 hands x8, head x7) and is equipped with position, velocity, and force sensors. The upper body has a modular design based on the average dimensions of a person, with 14 tactile sensors per hand. Like the previous versions, it moves on a mobile platform. In 2008 the team built a slightly upgraded version of the robot called ARMAR-IIIb (red). Both robots use the Karlsruhe Humanoid Head, which has 2 cameras per eye (for near and far vision). The head has a total of 7 degrees of freedom (neck x4, eyes x3), 6 microphones, and a 6D inertial sensor.

Besides developing mechatronic components for the robots, the researchers also want the robot to be capable of interacting naturally with people. They are focusing on interactive learning (through observation, verbal instruction, and gesture), which requires multimodal interaction (speech recognition, dialog processing, and visual perception of the human instructor). The robot is therefore able to learn to recognize new words, objects, and people. Using a motion-capture set-up, the robot also learns how a person moves and manipulates objects by tracking and analyzing key points on the human body (such as joint positions) as the person performs a task.
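To make the key-point idea concrete, here is a minimal sketch of how a demonstrated motion might be recorded as a sequence of tracked key points. The class name, joint labels, and data layout are illustrative assumptions, not the lab's actual software.

```python
# Hypothetical sketch: storing a demonstrated task as per-frame key-point
# positions from a motion-capture stream, for later analysis or imitation.
from dataclasses import dataclass, field

@dataclass
class MotionDemo:
    """A demonstrated task, stored as per-frame key-point positions."""
    task_name: str
    frames: list = field(default_factory=list)  # each frame: {joint: (x, y, z)}

    def add_frame(self, keypoints: dict):
        self.frames.append(dict(keypoints))

    def joint_trajectory(self, joint: str):
        """Extract the 3D path of a single joint across the demonstration."""
        return [f[joint] for f in self.frames if joint in f]

# Usage: frames would normally arrive from the motion-capture system
demo = MotionDemo("open_drawer")
demo.add_frame({"right_wrist": (0.42, 0.10, 0.95), "right_elbow": (0.30, 0.05, 1.10)})
demo.add_frame({"right_wrist": (0.45, 0.12, 0.93), "right_elbow": (0.31, 0.06, 1.09)})
wrist_path = demo.joint_trajectory("right_wrist")
```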

Manipulating objects is one of the most complex tasks for robots. Motion trajectory, grasping, and manipulation planning, both in simulation and in a realistic kitchen setting, are being carried out with ARMAR-III. First, a simulation of the robot's hand tests the various ways it can firmly hold 3D models of standard household objects. At the time there was no simulation software suitable for their needs, so the team had to build their own! They called it OpenGrasp, and it is based on the modular simulation toolkit OpenRAVE. The software is compatible with a variety of physics engines and programming languages.
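The loop such a simulator automates looks roughly like the sketch below: enumerate candidate grasps, score each one, and keep the best. The quality metric and object model here are toy placeholders for illustration only; they are not the OpenGrasp or OpenRAVE API.

```python
# Illustrative grasp-candidate evaluation loop, in the spirit of what a grasp
# simulator automates. The scoring heuristic is a stand-in for real contact
# and force-closure analysis.
import itertools
import math

def approach_directions(n=8):
    """Sample candidate approach directions around the object (unit vectors in the XY plane)."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n), 0.0)
            for k in range(n)]

def grasp_quality(obj, direction, wrist_roll):
    """Placeholder score: a real simulator derives this from contact points,
    friction cones, and force-closure analysis."""
    # Toy heuristic: prefer approaches aligned with the object's main axis.
    alignment = abs(direction[0] * obj["axis"][0] + direction[1] * obj["axis"][1])
    return alignment * math.cos(wrist_roll) ** 2

def plan_best_grasp(obj):
    candidates = itertools.product(approach_directions(), [0.0, math.pi / 4, math.pi / 2])
    scored = [(grasp_quality(obj, d, roll), d, roll) for d, roll in candidates]
    return max(scored)  # (best score, approach direction, wrist roll)

cereal_box = {"name": "cereal_box", "axis": (1.0, 0.0, 0.0)}
best_score, best_direction, best_roll = plan_best_grasp(cereal_box)
```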

OpenGrasp allowed them to simulate the kinematics of the robot hand, the object's shape, material properties, forces and moments, as well as obstacles near the object. The simulation can then try to determine the most stable grip (out of a very large number of possibilities), taking into account exactly where the robot's fingers will contact the object. Once satisfied with the simulation results, the researchers can reliably perform collision-free one-handed and two-handed manipulation tasks with the real robot using visual servoing.
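Visual servoing itself is a simple idea: the cameras estimate the error between the hand and the target every cycle, and the arm is commanded to reduce it. The following is a minimal position-based sketch of that loop, assuming hypothetical camera and arm interfaces rather than the robot's actual control stack.

```python
# Minimal position-based visual servoing sketch: close the loop on the
# visually observed hand-to-target error with a proportional controller.
import numpy as np

def visual_servo_step(hand_pos, target_pos, gain=0.3):
    """Return an incremental Cartesian command that moves the hand toward the target."""
    error = np.asarray(target_pos) - np.asarray(hand_pos)
    return gain * error  # proportional control on the visual error

# Simulated loop: in reality the positions come from the stereo cameras and
# the step is sent to the arm controller each cycle.
hand = np.array([0.20, -0.10, 0.80])
target = np.array([0.45, 0.05, 0.95])
for _ in range(50):
    hand = hand + visual_servo_step(hand, target)
    if np.linalg.norm(target - hand) < 1e-3:
        break
```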

As part of an international four-year project called PACO-PLUS (Perception, Action, and Cognition through Learning of Object-Action Complexes), the researchers have tried a new approach to embodied cognition that couples objects with the actions that can be performed on them. Object-Action Complexes, or OACs, are a framework for object recognition and classification that not only allows the robot to build a kind of understanding of objects, but can also lead to the development of language, since OACs (as mutually grounded symbols) can be shared with other robots or people.
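One loose way to picture an OAC is as an object model paired with the actions it affords and their expected outcomes. The sketch below is only an intuition aid; PACO-PLUS defines OACs far more formally, and the field names here are invented for illustration.

```python
# Hedged sketch of an Object-Action Complex: perceptual features of an object
# coupled with the actions the robot has learned to perform on it.
from dataclasses import dataclass, field

@dataclass
class ObjectActionComplex:
    object_label: str                             # e.g. "cup"
    perceptual_features: dict                     # color, shape, size, ...
    actions: dict = field(default_factory=dict)   # action name -> expected outcome

    def afford(self, action: str, expected_outcome: str):
        """Attach an action the robot has learned it can perform on this object."""
        self.actions[action] = expected_outcome

cup = ObjectActionComplex("cup", {"color": "red", "shape": "cylinder", "graspable": True})
cup.afford("grasp", "object held in hand")
cup.afford("stack_on", "cup rests stably on another cup")
```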

Children learn to classify objects through all sorts of interactions, such as putting things into their mouths. Much can be learned about an object that way: its size and weight, taste, texture, temperature, and whether it is hard or soft. In much the same way, ARMAR-III can use all of its sensors to detect and classify objects in a process called proprioceptive learning. Its cameras can detect features, shapes, and colors, while its tactile sensors can measure an object's rigidity or softness. Its microphones can listen to the sounds an object makes as it is manipulated (such as pebbles rolling inside a bottle), and in the future smell and taste may play a role. Like children interacting with a parent or teacher, this learning process can be sped up by a human instructor who tells the robot what color an object is, or who guides its arm when opening a drawer. In time, the robot builds up a large library of OACs that make use of each object's unique properties.
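A rough sketch of the multimodal idea: features from different sensors are combined into one observation and matched against objects the robot has already explored. The nearest-neighbour rule and the three-number feature vectors below are assumptions made for the example, not the lab's actual learning method.

```python
# Sketch of multimodal object classification: concatenated vision/touch/sound
# features are matched against previously explored objects.
import math

known_objects = {
    # label: (hue, rigidity, rattle_loudness) -- illustrative features
    "plastic_bottle": (0.10, 0.40, 0.80),
    "ceramic_cup":    (0.55, 0.95, 0.05),
    "sponge":         (0.30, 0.05, 0.00),
}

def classify(features):
    """Return the known object whose stored features are closest to the observation."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(known_objects, key=lambda label: distance(features, known_objects[label]))

observation = (0.12, 0.35, 0.75)   # seen: bluish, squeezes a little, rattles
print(classify(observation))       # -> "plastic_bottle"
```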

ARMAR-III was able to fetch a cereal box from an unknown location and stack cups by color in the laboratory kitchen. It also learned more complex tasks, such as how to load and unload a dishwasher (see videos below). These sorts of behaviors will be required of household helper robots in the future, and the PACO-PLUS approach seems to be a successful way of developing them.

[source: Karlsruhe Institute of Technology] via [IEEE Spectrum]

Media

Video (Mirror):


Video (Mimicking human poses) (Mirror):


Video (Unloading a dishwasher) (Mirror):


Video (Haptic object verification and deformation investigation) (Mirror):


Video (Dual-arm manipulation) (Mirror):


Video (Visually searching for Kellogg’s Frosties cereal) (Mirror):




  • alex

The problem is that we don’t know how the robot is programmed. It looks like it probably can’t recognize the cups unless they are colored, which seems very cheap. Some of it can look convincing (opening the door, taking the rack out), but it could just be a routine. The colored cups make me suspect it doesn’t have much intelligence.

  • alex

It looks like for each thing the robot has to learn, they program exactly how it has to learn it. I don’t think that’s the right way, because there’s no real intelligence in it. For each thing to learn there is a preprogrammed routine, so it’s just an upgrade over robots that only run preprogrammed routines. They should throw all that OpenXXX stuff away and start from scratch on a real artificial intelligence, i.e. make the learning more universal instead of one thing to learn = one routine.

    • Robotbling

Hm, I think that assessment may be a bit unfair. After all, the dishwasher routine is pretty complicated. The dishes may be positioned unexpectedly, and their individual locations would be unknown. The robot has to use its visual servoing and its knowledge of each object in order to grasp each one properly.

      Also, the demo where the robot searches for an object and retrieves it is pretty complicated too for similar reasons. It seems like a good method for solving some of these basic problems.