UPDATED 22:40 EDT / JULY 02 2019


Intel expands its AI collaboration with China’s Baidu

Intel Corp. is working with Baidu Inc. to optimize Intel's latest Nervana Neural Network Processor for the Chinese internet giant's PaddlePaddle deep learning framework, with the aim of speeding up the training of artificial intelligence models.

Intel said its NNP-T processor, introduced late Tuesday, is a “new class of efficient deep learning system hardware designed to accelerate distributed training at scale.”

Meanwhile, PaddlePaddle, which stands for “PArallel Distributed Deep Learning,” is the main deep learning platform used by Baidu to power its AI services.

The chipmaker said its collaboration with Baidu, announced at the Baidu Create AI developer conference Tuesday, is important because AI has advanced to the point where it has become a “pervasive capability” that will be used to enhance just about every kind of computing application in smartphones, computers and data centers. As a result, there’s a need for AI-specific hardware such as the NNP-T chip to be optimized for the most commonly used frameworks.

“The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale,” said Naveen Rao, corporate vice president and general manager of Intel’s AI Products Group. “Intel and Baidu are focusing on building radical new hardware, co-designed with enabling software, that will evolve with this new reality – something we call ‘AI 2.0.’”

The collaboration extends a partnership between the two firms that stretches back almost a decade. In recent years, the companies have partnered to optimize Intel’s previous-generation Xeon Scalable processors on Baidu’s PaddlePaddle framework, so the move to optimize NNP-T is a logical next step.

“Processor architectures and platforms need to be optimized for developers in order to be meaningful,” said Holger Mueller, principal analyst and vice president at Constellation Research Inc. “This is even more critical for new and upcoming AI architectures, and that explains why this partnership is important for Intel. But partnerships are one thing, real developer adoption is another, and so we will have to wait and see in a few quarters what kind of uptake this will yield.”

Intel has also worked with Baidu in the past to optimize its Optane DC Persistent Memory for Baidu's AI framework, enabling the Chinese company to take advantage of the memory's superior performance to deliver personalized content to users through its AI recommendation engine.

Not least, the companies are working together on something called "MesaTEE," a "memory-safe function-as-a-service" framework that allows security-sensitive services such as banking, autonomous driving and healthcare to process data more securely on platforms such as public cloud infrastructure and blockchains.

Photo: Isriya Paireepairit/Flickr
