UPenn Robotics’ Recent Work With DARwIn-OP

Researchers at the University of Pennsylvania are working with the DARwIn-OP (which they helped to create) on a couple of fun projects that are worth a look.

The first project, by Steve McGill, is called Collaborative Mobile Manipulation Among Humanoid Robots: two DARwIn-OPs have to work together to lift and carry a small “stretcher”. Each robot uses its single onboard camera to recognize the stretcher (color-coded to make the task easier) and estimate its relative position and orientation. The robots then have to take up specific positions next to it in order to grasp its handles; if they don’t stand in the right spot, their claw-like hands will miss the handles and the task fails.
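
McGill’s code isn’t published with the post, but the color-based detection step can be sketched in a few lines of OpenCV. Everything below (the HSV thresholds, the marker color, the frame source) is an assumption for illustration, not the team’s actual pipeline:

```python
import cv2

def find_stretcher(frame_bgr):
    """Locate a color-coded stretcher in one camera frame and estimate
    its image position and in-plane orientation. The HSV thresholds
    below are placeholders, not the values UPenn actually uses."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold for a hypothetical orange marker color.
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # stretcher not in view
    # Assume the largest blob of that color is the stretcher.
    blob = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(blob)
    # (cx, cy) is the stretcher's image position; angle is its in-plane
    # orientation. Combined with the known camera pose, this is enough
    # to work out where the robot should stand to reach the handles.
    return (cx, cy), angle
```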

The robots communicate over Wi-Fi to let each other know when they’re ready to pick the stretcher up, which allows them to synchronize their actions. Once the stretcher is in hand, any unsynchronized movement will cause them to drop it, so the robots also have to keep their walking gaits in step while carrying it around. It’s a fun and challenging project that still needs some fine-tuning, but check it out.
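
The synchronization itself can be as simple as a barrier-style handshake over UDP. Here’s a minimal sketch, assuming a made-up READY message format and peer address; the real system presumably does something more robust:

```python
import socket
import time

PEER = ("192.168.1.42", 9999)   # hypothetical address of the other robot
PORT = 9999

def synchronized_lift():
    """Barrier-style handshake: announce READY, wait for the peer's
    READY, then lift at an agreed future timestamp so both ends of the
    stretcher come up together. The message format is an assumption."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(0.5)

    lift_at = time.time() + 2.0          # propose lifting 2 s from now
    peer_ready = False
    while not peer_ready:
        sock.sendto(b"READY %f" % lift_at, PEER)
        try:
            msg, _ = sock.recvfrom(64)
            if msg.startswith(b"READY"):
                # Both sides settle on the later of the two proposals.
                lift_at = max(lift_at, float(msg.split()[1]))
                peer_ready = True
        except socket.timeout:
            pass                          # keep retransmitting READY

    time.sleep(max(0.0, lift_at - time.time()))
    # ...trigger the lift motion here; the same pattern can gate each
    # step of the walking gait while the stretcher is carried.
```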

[Video: Teamwork]

The second project, by Yida Zhang, is called Real-Time Obstacle Detection for Humanoid Robot with Single Camera. Here, a DARwIn-OP uses the RGB images from its single onboard camera and its limited embedded PC to detect the positions of obstacles in real time. This is made harder by the distortion in the video caused by the robot’s own motion as it walks. For the time being, the obstacles are all the same size, shape, and color. As an extra, Yida also worked out some motion planning that lets the robot work its way toward a goal position (see the planner sketch below).
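
Because every obstacle has the same known size, a single camera is enough to estimate range: under a pinhole camera model, distance ≈ focal length × real width / apparent pixel width. A hedged OpenCV sketch, where the focal length, obstacle width, and color thresholds are all made-up values, not Yida’s:

```python
import math
import cv2

FOCAL_PX = 600.0    # assumed focal length in pixels (from calibration)
OBSTACLE_W = 0.10   # assumed real obstacle width in meters

def locate_obstacles(frame_bgr):
    """Detect the same-size, same-color obstacles and estimate range
    and bearing to each from apparent width alone (pinhole model).
    All thresholds and constants here are guesses for illustration."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))  # e.g. blue
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    obstacles = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 8:                         # reject small noise blobs
            continue
        # Known real width + apparent pixel width -> distance.
        distance = FOCAL_PX * OBSTACLE_W / w
        # Horizontal offset from the image center -> bearing.
        bearing = math.atan2(x + w / 2 - frame_bgr.shape[1] / 2, FOCAL_PX)
        obstacles.append((distance, bearing))
    return obstacles
```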

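For the goal-seeking part, a generic potential-field planner gives the flavor: attract toward the goal, repel from each detected obstacle (with range/bearing detections first converted to (x, y) points). To be clear, this is a textbook method chosen for illustration; the post doesn’t say which planner Yida actually used:

```python
import numpy as np

def plan_step(robot_xy, goal_xy, obstacles, k_att=1.0, k_rep=0.3):
    """One step of a potential-field planner: attraction toward the
    goal plus repulsion from each obstacle (given as (x, y) points).
    Gains and radii are arbitrary values for illustration."""
    robot = np.asarray(robot_xy, dtype=float)
    force = k_att * (np.asarray(goal_xy, dtype=float) - robot)
    for obs in obstacles:
        diff = robot - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < 0.5:                # obstacles within 0.5 m repel
            force += k_rep * diff / d**3  # repulsion grows near obstacle
    # Take a fixed 5 cm step in the direction of the net force.
    step = force / max(np.linalg.norm(force), 1e-6)
    return robot + 0.05 * step
```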
As Yida points out, RoboCup’s humanoid league prohibits the sensors commonly used for this sort of task (such as laser range finders or the Kinect), and processing their data would in any case be too computationally expensive for the robots’ onboard PCs (for now, at least). This type of work may prove useful in future RoboCup matches, where detecting the position of the ball, the goal, and the opponents is essential.

[Video: Obstacle avoidance]

Incidentally, RoboTimes has published videos from the Japanese demonstration of DARwIn-OP that took place earlier this year (watch them after the break).

[source: Penn Robotics]

[Video: Balance]

[Video: High-speed knee bends]

[Video: Being kicked around]

[source: RoboTimes @ YouTube]
