UPDATED 21:15 EST / NOVEMBER 10 2016

NEWS

Google’s tiny Project Soli radar can now identify nearby objects

Google Inc. last year revealed its miniature Project Soli radar sensors, which are small enough to fit into mobile devices such as phones and smartwatches to power “touchless interactions” such as gesture controls. Now, they’re getting smarter.

Researchers from the University of St Andrews in Scotland have managed to use Project Soli’s radars not only to sense nearby objects, but also to recognize what those objects are. The researchers have named the new tool Radar Categorization for Input & Interaction, or RadarCat for short.

To create it, the researchers used the same principles behind computer vision, which uses artificial intelligence and machine learning to understand images. Computer vision has been used by companies such as Facebook Inc. and Google itself to describe what is occurring in a picture, including what sorts of objects are in the image and what actions are being done. For example, Google’s computer vision agent has been able to come up with very specific descriptions such as “a dog sitting on a beach next to a dog.”

RadarCat does not appear to be quite as smart as Google’s computer vision agent yet, but it has been trained using similar machine learning methods, which means that it will continue to get smarter over time. Currently, RadarCat can recognize a wide range of simple stationary objects such as fruits, office supplies and so on, but it can also understand more complicated concepts, such as the difference between an empty glass and a glass of water, as well as the difference between the front and back of a phone.
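The article doesn’t describe RadarCat’s actual classifier or feature set, but the general idea — supervised learning over feature vectors extracted from the radar signal — can be sketched in a few lines. The minimal example below uses a k-nearest-neighbors classifier with entirely synthetic feature values (the labels and numbers are illustrative stand-ins, not real Soli measurements):

```python
# Hypothetical sketch of RadarCat-style classification: each object placed on
# the radar yields a feature vector (e.g. amplitude/phase statistics across
# the sensor's receive channels), and a trained model maps that vector to a
# label. All feature values here are synthetic, chosen for illustration only.
import math

# Synthetic training set: (feature vector, object label).
TRAINING = [
    ([0.90, 0.10, 0.20], "empty glass"),
    ([0.85, 0.15, 0.25], "empty glass"),
    ([0.40, 0.70, 0.60], "glass of water"),
    ([0.45, 0.65, 0.55], "glass of water"),
    ([0.10, 0.90, 0.30], "phone (front)"),
    ([0.15, 0.85, 0.35], "phone (back)"),
]

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, k=3):
    """Return the majority label among the k nearest training samples."""
    neighbors = sorted(TRAINING, key=lambda t: euclidean(sample, t[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# A reading close to the "glass of water" examples classifies accordingly.
print(classify([0.42, 0.68, 0.58]))  # → glass of water
```

Because the model is learned from labeled examples rather than hand-written rules, adding more training data (new objects, new materials) improves it without changing the code — which is what lets a system like RadarCat “continue to get smarter over time.”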

Because it relies on radar rather than images, RadarCat has a few capabilities that computer vision programs do not. For example, lighting has no effect on RadarCat’s ability to detect objects. More importantly, RadarCat can differentiate between objects that are visually identical but are actually made of different materials.

The researchers noted that RadarCat has a number of potential use cases, such as powering an “object dictionary” that can offer useful information about the objects placed on it, including nutritional information for foods or hardware specifications for electronic devices. RadarCat could also be used by the visually impaired to differentiate between objects that feel similar, such as in the picture below.

[Image: RadarCat bleach demonstration]

Yes, the researchers from St Andrews really made that picture for a video demonstration of RadarCat. You can watch the full video showcasing RadarCat in action below:

Image courtesy of Google
