Google is moving compute intelligence to the edge with new offerings
By their very nature, the many devices that make up the internet of things work better at the edge of cloud computing, which pushes analytics and knowledge generation away from the central data center. The result is much quicker response times and communications, vital in a field always pressing to lower latencies across the board.
At the Google Cloud Next event this week, Google LLC introduced two new products specifically for edge compute. The first is Cloud IoT Edge, a software stack that can run on gateway devices, cameras, or any connected device that has compute capabilities. The second product is Edge TPU, a high-performance chip that can run machine-learning inference on the edge device itself.
“With the combination of Cloud IoT Edge as a software stack and with our Edge TPU, we think we have an integrated machine learning solution on Google Cloud Platform,” said Indranil Chakraborty (pictured), product lead, IoT, at Google Cloud.
Chakraborty spoke with John Furrier (@furrier) and Jeff Frick (@JeffFrick), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Google Cloud Next event in San Francisco. In addition to discussing Google’s commitment to supporting IoT, they spoke about IoT connectivity challenges. (* Disclosure below.)
Improving productivity, even when a device isn’t connected 24×7
LG CNS was looking to improve factory productivity. It built a machine-learning model to detect defects on its assembly line using Google's Cloud Machine Learning Engine, enlisting a single engineer who trained the model in the cloud in a couple of weeks. Now, with Cloud IoT Edge and the Edge TPU, the company can run that trained model locally on the camera itself, so it can do real-time defect analysis on a rapidly moving assembly line.
One of the continuing challenges of IoT arises when sensors are located, for example, on wind farms or in oil wells, where connectivity is limited and unreliable. As long as there's enough connectivity to download an updated model or the latest firmware and software, it's possible to run local compute and machine-learning inference on the edge device itself, Chakraborty explained.
“So you can train in the cloud, push down the updates to the edge device, and you can run local compute and intelligence on the device itself,” he concluded.
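The pattern Chakraborty describes can be sketched in a few lines. This is an illustrative Python sketch only, not the Cloud IoT Edge API: the `EdgeDevice` class, its method names, and the threshold-based stand-in for a trained model are all hypothetical, meant to show how a device with intermittent connectivity can keep inferring locally while picking up model updates whenever the cloud is reachable.

```python
# Hedged sketch of the "train in the cloud, push updates to the edge,
# infer locally" pattern. All names here are illustrative inventions,
# not Google Cloud IoT Edge APIs.

class EdgeDevice:
    def __init__(self, model_version=1):
        # Locally cached model; the device starts with whatever was
        # last downloaded, so it can work fully offline.
        self.model_version = model_version

    def try_update(self, cloud_available, cloud_version):
        # Download only when intermittent connectivity is up and the
        # cloud holds a newer trained model.
        if cloud_available and cloud_version > self.model_version:
            self.model_version = cloud_version
        return self.model_version

    def infer(self, reading):
        # Local inference always works, even offline. A simple
        # threshold check stands in for the trained defect model;
        # a newer model version tightens the threshold.
        threshold = 0.5 if self.model_version < 2 else 0.4
        return "defect" if reading > threshold else "ok"

device = EdgeDevice()
print(device.infer(0.45))                              # ok under model v1
device.try_update(cloud_available=True, cloud_version=2)
print(device.infer(0.45))                              # defect under tighter v2
```

In a real deployment, the stand-in threshold would be a TensorFlow Lite model compiled for the Edge TPU, and the update step would pull the retrained model from the cloud rather than bump an integer, but the control flow is the same: cloud training, opportunistic sync, always-available local inference.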
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Google Cloud Next event. (* Disclosure: Google Cloud sponsored this segment of theCUBE. Neither Google Cloud nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE
Since you’re here …
… We’d like to tell you about our mission and how you can help us fulfill it. SiliconANGLE Media Inc.’s business model is based on the intrinsic value of the content, not advertising. Unlike many online publications, we don’t have a paywall or run banner advertising, because we want to keep our journalism open, without influence or the need to chase traffic. The journalism, reporting and commentary on SiliconANGLE — along with live, unscripted video from our Silicon Valley studio and globe-trotting video teams at theCUBE — take a lot of hard work, time and money. Keeping the quality high requires the support of sponsors who are aligned with our vision of ad-free journalism content.
If you like the reporting, video interviews and other ad-free content here, please take a moment to check out a sample of the video content supported by our sponsors, tweet your support, and keep coming back to SiliconANGLE.