


NAO Learns to Autonomously Avoid Obstacles

Researchers at the Albert-Ludwigs-Universität Freiburg’s Humanoid Robots Lab presented work at ICRA 2011 that will be of interest to NAO developers.  The lab co-developed the laser range finder attachment with Aldebaran Robotics, and has now devised a method for detecting obstacles by combining data from that attachment with NAO’s monocular vision.  Because the laser range finder sits atop NAO’s head, it cannot detect obstacles directly in front of the robot, which raises the risk of bumping into unknown objects.  Previously, the robot had to stop frequently to scan ahead, which is an inefficient way to get around.

The new method uses the camera images as much as possible to avoid time-consuming laser scans.  A laser scan first identifies the ground plane, and anything that extends above it is labeled an obstacle.  These obstacles are then classified by their color and texture so that NAO can subsequently detect them with the camera alone.  NAO then builds an occupancy grid to plan its path: a 2D, bird’s-eye-view map that tracks the robot’s position and marks grid cells containing obstacles as occupied (non-traversable), allowing the robot to plan a path around them.  Using this method, NAO reached its goal much faster than by relying solely on the laser range finder, which requires frequent stops.
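To make the occupancy grid concrete, here is a minimal Python sketch (the class and names are hypothetical, not the Freiburg lab’s code): obstacle points detected above the ground plane are converted to grid cells and marked as occupied, and a path planner would then search only the free cells.

import numpy as np

# Minimal occupancy-grid sketch: a top-down 2D map where each cell is
# either free (0) or occupied (1).
class OccupancyGrid:
    def __init__(self, size_m=4.0, resolution_m=0.05):
        self.res = resolution_m
        n = int(size_m / resolution_m)
        self.grid = np.zeros((n, n), dtype=np.uint8)  # 0 = free, 1 = occupied
        self.origin = size_m / 2.0                    # robot starts at the centre

    def to_cell(self, x, y):
        # Convert world coordinates (metres) to grid indices.
        return int((x + self.origin) / self.res), int((y + self.origin) / self.res)

    def mark_obstacle(self, x, y):
        # Mark the cell containing an obstacle point as non-traversable.
        i, j = self.to_cell(x, y)
        if 0 <= i < self.grid.shape[0] and 0 <= j < self.grid.shape[1]:
            self.grid[i, j] = 1

# Points found above the ground plane (by laser scan or camera detection)
# are inserted as obstacles; a planner then routes through free cells only.
grid = OccupancyGrid()
for x, y in [(0.6, 0.1), (0.6, 0.15), (1.2, -0.3)]:   # example obstacle points
    grid.mark_obstacle(x, y)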

Researchers at the same lab also took the opportunity at ICRA 2011 to present their footstep planner for NAO.  More details can be found at the ROS wiki entry.

Video (NAO footstep planner):


The authors cite another method that doesn’t rely on an expensive laser range finder, but it is unfortunately unsuitable for NAO.  Similar results can be obtained when a robot is equipped with stereo vision, as demonstrated by Sony’s QRIO.  The disparity between the two camera images can be used to determine an object’s distance from the camera and to extract the ground plane; most objects sitting on top of the ground plane can then be treated as obstacles.  This method also builds an occupancy grid, which allows the robot to plan a path toward its goal.  The Sony researchers even programmed QRIO to shuffle sideways to squeeze between closely spaced obstacles, as well as negotiate small height differences (such as QRIO-sized stairs).
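As a rough illustration of the stereo geometry involved (the focal length and baseline below are made-up values, not QRIO’s actual calibration): for a calibrated camera pair, distance equals focal length × baseline ÷ disparity, so nearby objects produce large disparities and distant objects small ones.

# Depth-from-disparity sketch; parameter values are illustrative only.
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.06):
    # Distance (metres) to a point given its disparity between the two images.
    if disparity_px <= 0:
        return float("inf")   # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))   # a close object: ~1.0 m away
print(depth_from_disparity(5.0))    # a distant object: ~6.0 m away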

Video (2008: QRIO builds an occupancy grid using stereoscopic vision):


[source: ICRA 2011] & [IJRR]

 
