UPDATED 11:51 EST / DECEMBER 29 2010

Robotics Expert Uses Kinect to “Connect” Human and Machine

Thank you, Japan. I’ll start with that, because it looks like they’ve just helped us produce yet another amazing innovation for the Xbox Kinect. In this case, it’s teleoperator software for controlling a robot! Stick around for the interview video after the fold.

Information is sparse, but at least we know his name and some spiffy details, brought to us by the Robots Dreams blog:

Taylor Veltrop, pretty much working on his own in a suburb of Tokyo, has accomplished very professional and noteworthy work in humanoid robotics including integrating the Willow Garage ROS system and the Roboard with a Kondo KHR-1HV; publishing detailed information enabling others to replicate and improve on his work in an Open Source fashion; and making tons of previously obscure information, like Kondo UART configurations, clear and easy to understand and work with. If that wasn’t enough, he’s also a high level LEGO Mindstorms robot designer, and recently qualified as an official participant in the Aldebaran NAO Robot Developer Program.

I would be amazed if NASA isn’t very interested in this sort of thing for telepresent operation of in-field robots, waldos, and armatures. Right now, most interfaces for these devices go through computer screens, involve complex joystick rigs, and sometimes use hands-on motion actuators that capture finger and wrist movement through physical touch.

Taylor Veltrop participates in the Aldebaran NAO Robot Developer Program, an amazing repository of robotics knowledge and Open Source development that pioneers innovation in these sorts of projects. A controller of this type, using skeletal sensing, could certainly revolutionize the remote operation of devices that translate human dexterity.

The Microsoft Xbox 360 Kinect’s technology and its amazing ability to track the hands and even fingers—as seen in the production of software to aid in learning American Sign Language—could make it so that no external device is needed. A user could simply switch on the peripheral, reach into its field of view, and begin remote manipulation.
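The core idea behind such a teleoperation rig is simple: the depth camera’s skeletal tracker reports 3-D joint coordinates, the software computes the angle at each human joint, and that angle is clamped to the corresponding robot servo’s travel range. Here is a minimal sketch of that mapping, assuming the tracker already hands us 3-D coordinates for each joint; the function names and the example coordinates are hypothetical, not taken from Veltrop’s actual software.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 3-D points a-b-c
    (e.g. shoulder-elbow-wrist from a skeletal tracker)."""
    ab = [a[i] - b[i] for i in range(3)]
    cb = [c[i] - b[i] for i in range(3)]
    dot = sum(ab[i] * cb[i] for i in range(3))
    norm = (math.sqrt(sum(v * v for v in ab)) *
            math.sqrt(sum(v * v for v in cb)))
    return math.degrees(math.acos(dot / norm))

def angle_to_servo(angle_deg, lo=0.0, hi=180.0):
    """Clamp a human joint angle into a servo's safe travel range."""
    return max(lo, min(hi, angle_deg))

# Hypothetical skeleton frame: upper arm vertical, forearm horizontal,
# so the elbow is bent at a right angle.
shoulder = (0.0, 1.0, 0.0)
elbow    = (0.0, 0.5, 0.0)
wrist    = (0.3, 0.5, 0.0)
print(angle_to_servo(joint_angle(shoulder, elbow, wrist)))  # 90.0
```

In a real system this loop would run at the tracker’s frame rate, with smoothing and rate limiting before the angles are sent to the servo controller, so that sensor jitter doesn’t translate into robot jitter.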

Imagine doing telepresence work from Earth: controlling an armature in space to repair a space station, or carefully cutting and collecting rocks on the moon. Or, if you want something closer to the gravity well, think about the advances in microsurgery this sort of interface could provide.
