UPDATED 11:30 EST / NOVEMBER 19 2009

Deeper Dive on the $350 Sixth Sense

While browsing the newly redesigned Engadget, I caught a story about the MIT project Sixth Sense we talked about a bit on Tuesday. I've since done a little more digging on the project and extended an invitation to its members to talk more about what they're working on here at SiliconANGLE (though I'm sure they're particularly busy right now with the flood of attention they're getting, hopefully they'll be able to make some time for us).

So I read through the whitepaper that MIT published on the project, and was able to get a bit more detail on what's running this setup and the hardware that makes it so cheap and compelling.

As I made my way through the paper, I got a better look at the camera in some of the illustrations, and it was as I suspected: the Logitech QuickCam Pro 9000 for Laptops (seen in the image at right).

It's a quality camera, one that I've owned in the past, and second only to the camera I currently own (the non-laptop version of the same model). It's perfect for this sort of application: not only is it high-resolution, it's also highly compact and easy to clip to just about anything. Depending on where you buy it, the camera runs between $60 and $150.

The projector, one I wasn't familiar with, turns out to be a "3M MPro110," according to the whitepaper. I've never used it, but it appears to perform well in lighting conditions from dim to bright (if the demo video is any guide), and it can be had fairly cheaply, from $160 to $250.

It's also worth noting that the system relies on a laptop or other portable computing device tucked away in a pocket or shoulder bag. I believe during the TED demo they said it used a mobile phone, but the whitepaper describes an attached laptop.

Of course, the secret sauce in this equation is the software, and the whitepaper offers very few clues about the gestural algorithms or command interpreter it runs on. I've interviewed a number of companies that build gestural interfaces driven by camera input, and it all seems fairly proprietary and highly complex (not to mention cutting-edge), so while it's entirely possible they came up with this in the lab, I wouldn't be surprised to hear they're licensing or experimenting with others' technology.
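For what it's worth, the demo video suggests the system tracks colored markers on the user's fingertips. A minimal sketch of that general idea, assuming nothing about MIT's actual implementation (the function names and the frame model here are purely illustrative), is to threshold each camera frame for a marker's color and take the centroid of the matching pixels as the fingertip position:

```python
# Hypothetical sketch: locate a colored fingertip marker in a camera frame.
# A frame is modeled here as a 2D grid of (R, G, B) tuples; a real system
# would pull frames from the webcam and run this per marker, per frame.

def find_marker(frame, is_marker_color):
    """Return the (row, col) centroid of pixels matching the marker color,
    or None if the marker is not visible in this frame."""
    row_sum, col_sum, count = 0, 0, 0
    for r, row in enumerate(frame):
        for c, pixel in enumerate(row):
            if is_marker_color(pixel):
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

def is_red(pixel):
    # Crude color threshold; real trackers work in HSV to tolerate lighting.
    r, g, b = pixel
    return r > 200 and g < 100 and b < 100

# Tiny synthetic 4x4 frame with a red 2x2 "marker" in the lower-right corner.
BLACK, RED = (0, 0, 0), (255, 0, 0)
frame = [
    [BLACK, BLACK, BLACK, BLACK],
    [BLACK, BLACK, BLACK, BLACK],
    [BLACK, BLACK, RED,   RED],
    [BLACK, BLACK, RED,   RED],
]
print(find_marker(frame, is_red))  # centroid of the red patch: (2.5, 2.5)
```

Tracking the marker's position across frames gives you a gesture trail; recognizing gestures from those trails is where the genuinely hard (and, as noted above, usually proprietary) work lives.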

My hope is that it’s home-grown MIT technology, and that they’d be interested in open-sourcing it to the community. Being able to load up some software like this and turn the world into my surface computing device seems like it’d just be loads of fun.

I’ve included the whitepaper I’ve pulled this info from below the jump, since I can’t seem to locate where I originally grabbed it from.


WUW – Wear Ur World

