Here’s a cool interface system developed by a team of students from the University of Tokyo, Keio University, and the National University of Singapore that I came across while digging into the Foldy story from yesterday. It’s a human-robot interface that lets a person draw commands with a laser pointer, which a robot then executes. Like Foldy, it’s part of the JST ERATO Design Interface Project.
So far, a person can command a Roomba to move an object to an arbitrary location, gather several objects at a specific spot, or push an object into a designated “trash zone”. The Roomba performs these actions simply by pushing the objects around, following the strokes drawn with the laser pointer.
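The core of stroke-to-push translation is that a round robot like the Roomba can only nudge an object forward, so it first has to line up behind the object along the line toward the goal. Here’s a minimal sketch of that idea (my own illustration, not the researchers’ actual algorithm; the `standoff` distance is a made-up parameter):

```python
import math

def push_waypoint(obj, goal, standoff=0.2):
    """Where a pushing robot should line up: directly behind the
    object on the object->goal line, so driving straight forward
    shoves the object toward the goal.  Positions are (x, y) in
    meters; standoff is a hypothetical approach distance."""
    dx, dy = goal[0] - obj[0], goal[1] - obj[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return obj  # object is already at the goal
    ux, uy = dx / dist, dy / dist  # unit vector object -> goal
    return (obj[0] - ux * standoff, obj[1] - uy * standoff)

# Object at (1, 1), stroke endpoint at (2, 1):
# the robot lines up 0.2 m behind the object, at (0.8, 1.0)
print(push_waypoint((1.0, 1.0), (2.0, 1.0)))
```

Chaining this along a multi-point stroke lets the robot re-aim after each nudge, which is presumably how curved strokes get followed.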
Besides the laser pointer and Roomba, the system uses an overhead projector and a speaker (for feedback), overhead cameras that track the positions of the laser, the robot, and the objects, and a couple of PCs that communicate wirelessly with the Roomba. It sounds like a complicated set-up, but it looks like a perfectly natural interface, and the researchers claim anyone can pick it up in about 10 minutes. Check out the video:
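The camera-PC-robot pipeline is a classic sense-plan-act loop: the overhead cameras report positions, a PC steers the robot toward the next point of the stroke, and the command goes out over the wireless link. A rough sketch of one controller step for a differential-drive robot like the Roomba (all gains and thresholds here are illustrative guesses, not values from the project):

```python
import math

def follow_stroke(robot_pose, waypoints, reach=0.1, speed=0.2, k_turn=1.5):
    """One control step of stroke following (a sketch, not the
    researchers' code).  robot_pose = (x, y, heading); waypoints =
    remaining stroke points.  Returns (forward_speed, turn_rate,
    remaining_waypoints) for a differential-drive robot."""
    x, y, heading = robot_pose
    # Drop any waypoints the robot has already reached
    while waypoints and math.hypot(waypoints[0][0] - x,
                                   waypoints[0][1] - y) < reach:
        waypoints = waypoints[1:]
    if not waypoints:
        return 0.0, 0.0, []  # stroke finished: stop
    tx, ty = waypoints[0]
    # Turn rate proportional to heading error toward the next point
    error = math.atan2(ty - y, tx - x) - heading
    error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
    return speed, k_turn * error, waypoints
```

Running this each time the cameras report a fresh pose is enough to trace a stroke; the real system presumably layers the object-pushing logic on top.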