Can NVMe storage fix IoT edge computing before the 5G network can?
Keynote speakers at Pure//Accelerate 2017 in San Francisco, California, expounded on topics far beyond the humble storage array — namely artificial intelligence and edge computing — claiming advanced storage can bring them out of techies’ dreams and into enterprise applications.
“The confluence of massive sensor deployments and the incredible power of GPU-enabled deep learning is truly changing the world,” said Brian Gold, engineering director at Pure Storage Inc.
Exciting as this may be, though, lagging network speed threatens to dampen some of these possibilities — particularly those involving sensor data and its computation in the cloud, he said.
With the 5G network still on the horizon, Pure is asking whether new flash storage technology might instead process that precious data on the spot, at the edge.
Storage might enable a novel network located at endpoints — “an intelligent edge capable of storing and processing raw, unstructured data and shipping only the relevant high value content back to the cloud,” Gold said.
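The "intelligent edge" pattern Gold describes can be sketched in a few lines: store raw readings locally and forward only high-value events to the cloud. The threshold rule, record shape, and function name below are hypothetical illustrations, not anything Pure described.

```python
# Minimal sketch of edge filtering: raw, unstructured data stays at the
# edge; only relevant, high-value content is shipped back to the cloud.
# The anomaly rule (a simple magnitude threshold) is a stand-in for
# whatever model the edge device actually runs.
from typing import Iterable, List, Tuple

def filter_for_cloud(readings: Iterable[float],
                     threshold: float = 3.0) -> Tuple[List[float], List[float]]:
    """Keep all raw data at the edge; forward only anomalous readings."""
    edge_store: List[float] = []   # full raw stream, retained locally
    to_cloud: List[float] = []     # high-value subset, sent upstream
    for r in readings:
        edge_store.append(r)
        if abs(r) > threshold:
            to_cloud.append(r)
    return edge_store, to_cloud

raw, shipped = filter_for_cloud([0.1, 4.2, -0.3, 5.9, 1.1])
# raw retains all five readings; shipped contains only [4.2, 5.9]
```

In practice the filter would be a trained model rather than a threshold, but the data-movement shape — everything stored at the edge, a fraction shipped — is the same.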
Pure has collaborated with Google to build a seamless analytics pipeline from the edge to the cloud, which the company will be demonstrating as the conference rolls on.
NVMe only choice for serious data?
Pure’s new storage products are able to accomplish compute at the edge and other feats thanks in large part to Non-Volatile Memory Express technology, according to Matt Burr, Pure’s vice president of sales. NVMe is crucial to Internet of Things edge computing and deep learning because of the massive amounts of data that feed those technologies, he explained.
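One concrete reason NVMe scales where legacy protocols stall is command-queue parallelism: the AHCI (SATA) specification allows a single queue of 32 commands, while the NVMe specification allows up to 65,535 queues of up to 65,536 commands each. A rough comparison, with illustrative class names:

```python
# Back-of-the-envelope comparison of command-queue parallelism, one of
# the reasons NVMe outperforms legacy storage protocols at scale.
# Queue/depth figures come from the AHCI and NVMe specifications.
from dataclasses import dataclass

@dataclass
class Protocol:
    name: str
    queues: int   # number of command queues the spec allows
    depth: int    # commands per queue

    def max_outstanding(self) -> int:
        """Maximum I/O commands in flight at once."""
        return self.queues * self.depth

ahci = Protocol("AHCI/SATA", queues=1, depth=32)
nvme = Protocol("NVMe", queues=65_535, depth=65_536)

print(ahci.max_outstanding())  # 32
print(nvme.max_outstanding())  # 4294901760
```

Real devices implement far fewer queues than the spec maximum, but the gap in achievable parallelism is still orders of magnitude.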
“We need technology like NVMe, because as you cross 10, 15 terabytes, SSDs with legacy protocols behave like mechanical disks,” said Scott Dietzen, chief executive officer of Pure Storage. NVMe’s superior speed and performance is therefore needed to process those huge amounts of data, he said.
Pure is announcing a product featuring NVMe over Fabric and also, in partnership with Cisco, an all-NVMe converged infrastructure stack.
“We are super excited about taking a leadership position here and making all-NVMe storage mainstream,” said Jason Nadeau, director of business value marketing at Pure. SSDs have reached a “performance density crisis,” he added, with IOPS (input/output operations per second) per terabyte for larger SSDs actually dropping below that of disk.
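The "performance density" problem Nadeau refers to follows from simple arithmetic: a drive's IOPS ceiling is roughly fixed by its controller and interface, so IOPS per terabyte falls as capacity grows. The numbers below are hypothetical, chosen only to show the shape of the trend, not measured figures from Pure.

```python
# Illustrative performance-density arithmetic: the same controller-bound
# IOPS ceiling spread over more capacity yields fewer IOPS per terabyte.
def iops_per_tb(drive_iops: int, capacity_tb: float) -> float:
    """Performance density: I/O operations per second per terabyte."""
    return drive_iops / capacity_tb

# Hypothetical SSDs sharing the same ~100K IOPS controller ceiling:
small_ssd = iops_per_tb(100_000, 1)    # 100,000 IOPS/TB
large_ssd = iops_per_tb(100_000, 16)   # 6,250 IOPS/TB

print(small_ssd, large_ssd)
```

The 16 TB drive in this sketch has one-sixteenth the performance density of the 1 TB drive, which is the "crisis" Nadeau describes as SSD capacities climb.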
Pure’s new FlashArray//X with NVMe features an all-flash architecture called DirectFlash that eliminates the remaining disk-era performance bottlenecks, Nadeau stated.
With flash storage now microseconds away from the compute — as close to the CPU as devices in the same chassis — some customers now display what Dietzen calls “density lust.”
“We have enterprise customers that have driven density up by 100 fold — 20 racks of legacy storage replaced by one 4RU FlashBlade,” he said.
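Dietzen's consolidation claim can be sanity-checked with back-of-the-envelope arithmetic. The 42U-per-rack figure below is a standard assumption, not from the talk, and the result is a footprint ratio; the quoted "100 fold" density gain also depends on capacity per rack unit.

```python
# Sanity check: 20 racks of legacy storage vs. one 4RU FlashBlade.
RACK_UNITS = 42                       # standard full-height rack (assumed)

legacy_footprint_u = 20 * RACK_UNITS  # 840 rack units of legacy storage
flashblade_footprint_u = 4            # one 4RU FlashBlade chassis

ratio = legacy_footprint_u / flashblade_footprint_u
print(ratio)  # 210.0 — roughly a 210x reduction in rack space
```

A ~210x space reduction is consistent with a 100-fold density claim once the legacy racks are assumed to be only partially filled with storage shelves.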
Bringing storage closer to the compute allows massive data to be crunched with the new deep learning models, Dietzen added.
GPUs and storage power AI compute
“Deep learning upends the pecking order, and data trumps programming. Algorithms are now inferred from the data instead, as deep neural nets capable of abstraction mine it for insights,” Dietzen said.
Picking up this thread, Rob Ober, Tesla chief platform architect at Nvidia Corp., spoke about the crucial elements enabling deep learning.
Nvidia and Pure are collaborating on parallel computing using graphics processing units (GPUs) for artificial intelligence. Deep learning models require massive data for training, extreme compute power and neural nets, Ober explained. These three resources have become available in adequate quantities only in the past few years. So far, AI and deep learning have been used mostly for sales intelligence — similar-item recommendations, ad placement and so on, he added.
Wall Street, unbeknownst to many, also uses deep learning to drive business, Ober stated. “It’s a black art — they’re very quiet, but a lot of things that go on in Wall Street are deep learning-based,” Ober said.
For a glimpse at the latest innovative AI use cases, Ober recommended Andreessen Horowitz’s website aiplaybook.a16z.com.
Stay tuned for the complete video interview, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of Pure//Accelerate 2017. (* Disclosure: TheCUBE is a paid media partner for Pure //Accelerate 2017. Neither Pure Storage Inc. nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE