Google launches TensorFlow 2.0 with tools for building privacy-conscious AI
Google LLC today launched a new iteration of TensorFlow, its popular artificial intelligence framework, and a pair of complementary modules aimed at enabling algorithms to process user data more responsibly.
TensorFlow 2.0 focuses primarily on improving usability. The release brings a streamlined application programming interface based on Keras, an open-source high-level interface designed to make AI development frameworks easier to use. Engineers can now access features that were previously spread across multiple APIs in one place, along with more options for customizing the development workflow.
Another key enhancement is default support for so-called eager execution, a mode in which operations run immediately as they are called rather than requiring developers to first assemble a static computation graph. As a result, models in TensorFlow 2.0 start running much faster than in previous versions, which lets engineers try out different model variations with shorter delays between test runs. That has the potential to save a considerable amount of time given the highly iterative nature of machine learning development.
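As a rough analogy in plain Python (this is illustrative, not TensorFlow code), graph-style execution describes work to be run later, while eager execution computes each step the moment the line executes:

```python
# Conceptual contrast between graph-style and eager execution,
# sketched in plain Python. Function names are illustrative.

def build_graph(a, b):
    """Graph style: describe the computation now, run it later."""
    return lambda: a * b + a  # nothing is computed until this is called

def eager(a, b):
    """Eager style: each operation evaluates immediately."""
    product = a * b   # computed on this line, can be inspected right away
    return product + a

deferred = build_graph(3, 4)  # no work done yet
result_graph = deferred()     # the computation finally runs here
result_eager = eager(3, 4)    # computed line by line as it executes
```

In eager mode, intermediate values can be printed and debugged with ordinary tooling, which is much of why the edit-run-debug loop gets shorter.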
Yet even with the significant improvements in TensorFlow 2.0, it’s the two accompanying tools Google rolled out alongside the release that have drawn the most industry attention. They’re meant to help developers build privacy controls directly into their AI software to provide better protection of user information.
The first module, TensorFlow Privacy, prevents machine learning models from memorizing potentially sensitive data they’re not supposed to retain. It achieves that through differential privacy, a mathematical technique that ensures input differing from the information an algorithm typically ingests doesn’t leave a lasting imprint on the trained model. An AI-based spell checking tool, for instance, mostly takes letters as input, so long digit sequences such as credit card numbers stand out as atypical and can be kept out of what the model learns.
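As a toy illustration of the spell-checker example, atypical input such as a long digit run can be detected with a simple pattern check. This is a hypothetical pre-processing sketch in plain Python, not the TensorFlow Privacy API, whose actual protections are applied mathematically during training rather than by filtering text:

```python
import re

# Hypothetical sketch: a spell checker mostly sees words, so a long run
# of digits (e.g., a credit card number) is atypical input worth dropping.
# The 8-digit threshold is an arbitrary choice for illustration.
DIGIT_RUN = re.compile(r"\d{8,}")

def filter_atypical(text: str) -> str:
    """Strip long digit sequences before the text reaches the model."""
    return DIGIT_RUN.sub("", text)
```

For example, `filter_atypical("pay with 4111111111111111 please")` removes the card number while leaving ordinary words untouched.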
“To use TensorFlow Privacy, no expertise in privacy or its underlying mathematics should be required: those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes,” Google engineers Carey Radebaugh and Ulfar Erlingsson detailed in a blog post.
Google’s other new privacy module is called TensorFlow Federated. The software is aimed at the growing number of mobile services that rely on AI to support core features.
Because of mobile devices’ limited processing power, apps usually handle the learning aspect of machine learning by sending user data to a cloud-based backend for analysis. TensorFlow Federated enables apps to perform the analysis directly on the user’s handset. Developers can then collect the resulting insights and use them to improve their AI algorithms without having to access the underlying data, which increases privacy for consumers.
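The idea behind this training scheme, often called federated averaging, can be sketched in a few lines of plain Python. This is a conceptual illustration with made-up names, not the TensorFlow Federated API: each "device" takes a training step on its own private data, and the server averages only the resulting model weights.

```python
# Conceptual federated-averaging sketch (not the TFF API).
# Model: a single parameter w predicting a constant; loss is squared error.

def local_update(w, local_data, lr=0.1):
    """One gradient step computed on a device, using only its own data."""
    grad = sum(w - y for y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, devices):
    """Average locally trained weights; raw data never leaves the devices."""
    updates = [local_update(global_w, data) for data in devices]
    return sum(updates) / len(updates)

devices = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]  # private per-device data
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
# w approaches the average of the per-device optima, even though the
# server only ever sees weight updates, never the underlying data.
```

Equal weighting of devices is a simplification; practical schemes typically weight updates by how much data each device holds.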
“With TFF [TensorFlow Federated], we can express an ML model architecture of our choice, and then train it across data provided by all writers, while keeping each writer’s data separate and local,” Alex Ingerman and Krzys Ostrowski, two of the engineers who helped develop the project, wrote in a separate post.
Much like TensorFlow itself, the new modules are available under an open-source license.