DDN and Nvidia partnership powers the AI data center
Businesses confronted with rolling out an artificial intelligence project often stare at the assembled pieces and wonder what kind of engine will be needed to power the car. Data scientists? Check. AI initiative? Check. Deployment infrastructure? Uh-oh.
DataDirect Networks Inc. has announced a partnership with Nvidia Corp. to make AI deployments simpler. DDN’s new reference architecture marries Nvidia’s DGX-1 AI servers with DDN’s parallel file storage systems.
“It is a full rack-level solution, a reference architecture that’s been fully integrated and fully tested to deliver an AI infrastructure simply and completely,” said Kurt Kuckein (pictured, left), senior director of marketing at DDN. “That’s what we’ve made easy with Accelerated, Any-Scale AI [A³I], to be able to scale that environment seamlessly within a single name space so that people don’t have to deal with a lot of tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need.”
Kuckein spoke with Peter Burris (@plburris), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, at theCUBE’s studio in Palo Alto, California. He was joined by Darrin Johnson (pictured, right), global director of technical marketing for enterprise at Nvidia, and they discussed how the latest solution shortens runtimes for deep learning tools, boosts productivity for data scientists, and streamlines data delivery for enterprise applications. (* Disclosure below.)
Shorter runtimes for deep learning
DDN has indicated that deep learning frameworks, such as Caffe or TensorFlow, will have shorter runtimes for image throughput when running on Nvidia’s DGX-1 servers. The goal is to allow data scientists to focus on algorithms that will generate tangible benefits for the business rather than having to configure systems. The partnership announcement was followed by news today that Nvidia would launch a new acceleration platform for AI.
“Data scientists don’t want to understand the underlying file system, networking, remote direct memory access, InfiniBand, any of that,” Johnson said. “They just want to be able to come in, run their TensorFlow, get the data, get the result. This solution helps bring that to customers much more easily so those data scientists don’t have to be system administrators.”
DDN’s partnership with Nvidia is designed to offer customers end-to-end parallel architecture with the lowest latency and highest throughput for feeding critical data to enterprise applications.
“In the end, it’s the application that’s most important to both of us,” Kuckein said. “It’s making the discoveries faster. It’s processing information out in the field faster. It’s doing analysis of the MRI faster.”
Watch the entire video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations. (* Disclosure: DataDirect Networks Inc. sponsored this segment of theCUBE. Neither DDN nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE
Since you’re here …
… We’d like to tell you about our mission and how you can help us fulfill it. SiliconANGLE Media Inc.’s business model is based on the intrinsic value of the content, not advertising. Unlike many online publications, we don’t have a paywall or run banner advertising, because we want to keep our journalism open, without influence or the need to chase traffic. The journalism, reporting and commentary on SiliconANGLE — along with live, unscripted video from our Silicon Valley studio and globe-trotting video teams at theCUBE — take a lot of hard work, time and money. Keeping the quality high requires the support of sponsors who are aligned with our vision of ad-free journalism content.
If you like the reporting, video interviews and other ad-free content here, please take a moment to check out a sample of the video content supported by our sponsors, tweet your support, and keep coming back to SiliconANGLE.