UPDATED 13:34 EDT / MAY 11 2016

NEWS

Amazon says its new deep learning library is 2x faster than Google’s

Though Amazon.com Inc. doesn’t make a habit of sharing its internally developed software with the outside world, the number of fellow web giants that have open-sourced their deep learning technology in recent months has apparently prompted a change of heart. The company quietly joined the fray yesterday with the release of a C++ library for building neural networks that could make the task significantly faster for data scientists.

Dubbed DSSTNE (pronounced “destiny”), the framework owes its speed in large part to the parallelization mechanism Amazon built under the hood to handle distributed processing. Most alternatives execute deep learning models by running a separate copy of the code on each GPU at their disposal and synchronizing the activity through some sort of orchestration mechanism. Others assign each major element of the algorithm to a different chip, which is slightly more efficient but still doesn’t make the most of the available hardware. DSSTNE, by contrast, implements an improved variation of the latter method: it works out how many calculations a given processor can handle and then distributes the load accordingly.
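The load-balancing idea described above — sizing each device’s share of a layer’s work to its measured throughput rather than splitting evenly — can be sketched roughly as follows. This is a hypothetical illustration, not DSSTNE’s actual scheduler; the function name and the proportional-split heuristic are assumptions for the sake of the example:

```python
def partition_units(total_units, throughputs):
    """Split a layer's units across devices in proportion to each
    device's relative throughput (hypothetical sketch, not DSSTNE code)."""
    total = sum(throughputs)
    shares = [total_units * t // total for t in throughputs]
    # Hand any rounding remainder to the fastest device so every unit is covered.
    shares[throughputs.index(max(throughputs))] += total_units - sum(shares)
    return shares

# Example: 1,000 output units over three GPUs with relative speeds 3:2:1.
print(partition_units(1000, [3, 2, 1]))  # → [501, 333, 166]
```

Under this scheme a faster chip simply receives a proportionally larger slice of the layer, instead of each chip receiving either a full copy of the model or one fixed piece of it.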

The resulting performance improvement becomes especially pronounced when the framework is used to analyze so-called sparse datasets, in which most of the values are missing or empty. Amazon optimized DSSTNE for handling partial information to help speed the creation of recommendation engines, which typically don’t have access to all the information they’re programmed to weigh. The product suggestion feature on the retail giant’s website, for instance, can’t take a visitor’s buying history into account if they’re not logged into their account. Search applications can exploit the framework as well to deal with the semantic gaps that often appear in user queries.
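The payoff from sparsity comes from skipping the zeros entirely. A minimal sketch of the idea, assuming a sparse binary input (e.g. a user’s purchase vector) stored as a list of active feature indices — the function and layout here are illustrative assumptions, not DSSTNE’s API:

```python
def sparse_activations(active_indices, weights, num_outputs):
    """Pre-activations of one layer for a sparse binary input: only the
    weight rows matching nonzero input features are touched (sketch)."""
    out = [0] * num_outputs
    for i in active_indices:          # iterate active features only,
        row = weights[i]              # not the full input dimension
        for j in range(num_outputs):
            out[j] += row[j]
    return out

# A user with purchases at feature indices 1 and 3, in a 4-feature, 2-output layer.
W = [[1, 2],
     [3, 4],
     [5, 6],
     [7, 8]]
print(sparse_activations([1, 3], W, 2))  # → [10, 12]
```

With catalog-scale inputs — millions of features, a handful active per user — work scales with the handful rather than the millions.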

Amazon also plans to add support for image and speech recognition algorithms down the road in an effort to broaden DSSTNE’s appeal even further. The library already poses a threat to existing deep learning engines: the retail giant claims that it outperformed Alphabet Inc.’s popular TensorFlow framework by a factor of 2.1 in an internal benchmark test.

Image via blickpixel 
