Google Brain researchers teach AI to make its own encryption
Researchers at Google Brain, Google’s deep learning project, have worked out a way to teach artificial intelligence neural networks how to create their own encryption formulas.
In a research paper published by Martín Abadi and David Andersen, “Learning to Protect Communications with Adversarial Neural Cryptography,” the researchers put three AIs together: two that attempt to communicate secretly (Alice and Bob) and a third that tries to spy on that communication (Eve).
According to the paper, Alice’s job is to construct messages using some sort of secret algorithm, and Bob’s duty is to discover how to decrypt those messages. On the other side of the divide, Eve listens in on the communication between Alice and Bob and attempts to read each message sent.
The objective of the entire process is to have Alice and Bob come up with a communication scheme that Eve cannot easily break, all without teaching Alice, Bob or Eve any particular encryption scheme.
The only thing shared between Alice and Bob was a predetermined cryptographic key to which Eve did not have access. From there, Alice would iterate through ways of encrypting a message and send it along to Bob. Whether Bob succeeded in decrypting each message, and whether Eve failed to read it, determined how the networks adjusted their parameters for the next attempt.
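The feedback loop described above can be made concrete with the loss functions from the paper. The sketch below is an illustration, not the researchers’ code: it assumes bits are encoded as values in [-1, 1] (as in the paper) and shows only the objectives, not the networks or the optimizer. Eve minimizes her reconstruction error, while Alice and Bob minimize Bob’s error plus a term that pushes Eve toward chance level rather than toward being maximally wrong, since an always-wrong Eve could simply flip her output.

```python
N_BITS = 16  # message length used in the experiment


def l1_distance(plaintext, reconstruction):
    """Average L1 error; with bits in {-1, 1}, this counts wrong bits."""
    return sum(abs(p - r) for p, r in zip(plaintext, reconstruction)) / 2.0


def eve_loss(plaintext, eve_guess):
    """Eve simply tries to reconstruct the plaintext."""
    return l1_distance(plaintext, eve_guess)


def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits=N_BITS):
    """Alice and Bob want Bob accurate and Eve at chance (n_bits/2 wrong)."""
    bob_error = l1_distance(plaintext, bob_guess)
    eve_error = l1_distance(plaintext, eve_guess)
    # Penalty is zero when Eve gets exactly half the bits wrong.
    eve_term = ((n_bits / 2 - eve_error) ** 2) / ((n_bits / 2) ** 2)
    return bob_error + eve_term
```

In each training round, Eve’s parameters are updated to lower `eve_loss`, and Alice’s and Bob’s are updated to lower `alice_bob_loss`, which is what drives the arms race described above.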
The entire message was only 16 bits long. That’s not a very long message, but it was sufficient for the simple encryption learning the researchers wanted to demonstrate.
“Neural networks are generally not meant to be great at cryptography,” the researchers wrote. “Famously, the simplest neural networks cannot even compute XOR, which is basic to many cryptographic algorithms. Nevertheless, as we demonstrate, neural networks can learn to protect the confidentiality of their data from other neural networks: they discover forms of encryption and decryption, without being taught specific algorithms for these purposes.”
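To see why the researchers single out XOR: XOR with a shared key is the basis of the one-time pad, one of the simplest encryption schemes, because applying the same key twice returns the original message. The values below are made up for illustration.

```python
def xor_bits(message, key):
    """Encrypt (or decrypt) a bit list by XORing it with a key of equal length."""
    return [m ^ k for m, k in zip(message, key)]


message = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary example plaintext
key     = [0, 1, 1, 0, 1, 0, 0, 1]  # shared secret key

ciphertext = xor_bits(message, key)
recovered = xor_bits(ciphertext, key)  # XOR with the same key undoes it
assert recovered == message
```

A network that cannot represent XOR therefore cannot even express this most basic cipher, which is what makes the learned encryption in the experiment notable.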
During the experiment, the researchers found that Alice was able to devise methods for communicating secretly with Bob. Eve, however, was not easily thwarted, and the presence of that adversary spying on the other two drove them to keep refining their secret messages.
At first, the AIs were not very good at sending messages to one another, but over time they got better. After 15,000 iterations, the researchers found that while Bob was able to decrypt the messages every time, Eve could correctly recover only 8 of the 16 bits. Since each bit could only be a 1 or a 0, getting half of them right is no better than pure chance.
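Recovering 8 of 16 bits is exactly what blind coin-flip guessing yields on average. A quick simulation (an illustration, not part of the paper) confirms this:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
N_BITS, TRIALS = 16, 10_000


def bits_guessed_correctly():
    """Guess a random 16-bit message with a random 16-bit guess."""
    message = [random.randint(0, 1) for _ in range(N_BITS)]
    guess = [random.randint(0, 1) for _ in range(N_BITS)]
    return sum(m == g for m, g in zip(message, guess))


average = sum(bits_guessed_correctly() for _ in range(TRIALS)) / TRIALS
print(average)  # lands very close to 8.0, i.e. half the bits
```

In other words, by the end of training Eve had learned essentially nothing about the plaintext.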
Human-made encryption is still far beyond the reach of the cryptography learned by the AI systems in this experiment. However, the way encryption is typically broken, by seeking out patterns in large sets of intercepted encrypted messages, is a mainstay of big data and machine learning.
In an era when security on the Internet is built on the power of cryptography, powerful interests turning machine learning towards breaking encryption could become the next security arms race.