UPDATED 14:47 EST / DECEMBER 13 2017


AI could fly to the IoT edge on time with FPGAs

Lugging all data from “internet of things” connected devices back to the cloud for processing may work in theory or in testing, but not once a finished product goes live. For a product to claim artificial intelligence, it must show its stuff with on-the-spot, instant inferences; there’s no time for round trips to the data center. This means edge hardware has to chip in on compute power.

“We need that compute in the data center, but we have to start pushing it out into the edge,” said Bill Jenkins (pictured), product line manager of AI for field programmable gate arrays, or FPGAs, at Intel. A new class of smarter edge hardware is needed to process that data. Sprucing up devices with flexible, programmable hardware like FPGAs can help them be all they can be, he added.

“We want to make those smarter so that we can do more compute to offload the amount of data that needs to be sent back to the data center as much as possible,” Jenkins said.

He spoke with Jeff Frick (@JeffFrick), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Supercomputing event in Denver, Colorado. (* Disclosure below.)

FP (future proof) GAs

Much training of AI and machine learning models on big data takes place in the cloud or data centers — and that’s fine. “But now people are building products around it,” Jenkins said. That means that time-to-inference must be super short. In the case of autonomous vehicles, for instance, “where someone’s crossing the road, I’m not waiting two seconds to figure out it’s a person,” he added.

It also changes data scientists’ and developers’ outlooks on hardware at the edge. “They realize that they don’t want to compensate for limitations in hardware; they want to work around them,” Jenkins stated.

FPGAs are one route around those limitations. For instance, once a network is trained, people often go back to retrain and may find accuracy pleasing but performance wanting. “So then they start lowering the precision,” Jenkins said. Not ideal. The flexibility of FPGAs allows users to adjust a network’s internals without giving up as much precision, he added.
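The precision lowering Jenkins describes is commonly done as post-training quantization: compressing trained 32-bit floating-point weights into 8-bit integers to cut memory and compute cost, at the price of some fidelity. The Python sketch below is illustrative only; the weight values and names are invented, and real deployments use vendor toolchains rather than hand-rolled code like this.

```python
import numpy as np

# Stand-in for trained 32-bit weights (values invented for illustration).
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.5, size=1000).astype(np.float32)

# Symmetric linear quantization: map the observed float range onto int8.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to measure how much fidelity the lower precision costs:
# the round-trip error is bounded by half the quantization step (scale / 2).
weights_restored = weights_int8.astype(np.float32) * scale
max_error = np.abs(weights_fp32 - weights_restored).max()
print(f"max round-trip error: {max_error:.6f} (step size {scale:.6f})")
```

The trade-off is visible in the step size: the wider the weight range squeezed into 256 integer levels, the coarser each step and the larger the worst-case error, which is why teams that quantize aggressively often see accuracy drop.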

And if FPGA users decide to go a different way later on, they can reprogram the chips. “So it gives you that future-proofing, that capability to sustain different topologies, different architectures, different precisions to kind of keep people going with the same piece of hardware without having to say, ‘Spin up a new ASIC [application-specific integrated circuit],'” Jenkins concluded.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Supercomputing 2017 conference. (* Disclosure: TheCUBE is a paid media partner for the Supercomputing 2017 conference. Neither Intel, the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
