Google’s AI beat a professional Go player, and it’s kind of a big deal
AlphaGo, an artificial intelligence program developed by Google’s DeepMind team, has become the first AI to beat a professional Go player, a milestone for the field of AI design.
The rules of Go are relatively simple, yet designing an AI that can play the game at the level of top players has proven extremely difficult, primarily because of the vast number of possible moves at every turn.
“As simple as the rules are, Go is a game of profound complexity,” Demis Hassabis, CEO and co-founder of DeepMind, said on Google’s blog. “There are [1 × 10¹⁷¹] possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess.”
He added, “This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.”
AlphaGo played (and mostly won) hundreds of games against other Go-playing AIs, and the program was finally tested against Fan Hui, the reigning three-time European Go champion. AlphaGo won all five games against its human opponent, marking the first time a computer has beaten a professional Go champion.
The next test for the AI will pit it against Lee Sedol, whom Hassabis called “the top Go player in the world over the past decade.”
Why it matters
Teaching a computer to play Go may not seem like a big deal, but it is a major breakthrough for the field of artificial intelligence. Perhaps the most important takeaway is that AlphaGo was not told how to play Go—it actually taught itself.
“We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent),” Hassabis explained.
“But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.”
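The self-play loop Hassabis describes can be illustrated in miniature. The sketch below is a hypothetical toy example, not DeepMind’s code: instead of Go and deep neural networks, it uses a Nim-like game (players alternately remove one or two stones; whoever takes the last stone wins) and a simple tabular policy, sharpened by the same trial-and-error idea — moves made by the eventual winner are reinforced, and moves made by the loser are discouraged.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for AlphaGo's self-play reinforcement learning:
# a Nim-like game with a tabular policy instead of a neural network.

ACTIONS = (1, 2)  # legal moves: take 1 or 2 stones

def play_game(policy, pile=7):
    """Self-play one game; return the move history and the winner (0 or 1)."""
    history, player = [], 0
    while pile > 0:
        probs = policy[pile]
        action = random.choices(ACTIONS, weights=probs)[0]
        action = min(action, pile)  # can't take more stones than remain
        history.append((player, pile, action))
        pile -= action
        if pile == 0:
            return history, player  # whoever takes the last stone wins
        player = 1 - player

def train(games=5000, lr=0.05, seed=0):
    random.seed(seed)
    # start from a uniform policy over the two legal moves
    policy = defaultdict(lambda: [0.5, 0.5])
    for _ in range(games):
        history, winner = play_game(policy)
        for player, state, action in history:
            i = ACTIONS.index(action)
            # trial and error: reinforce the winner's choices,
            # discourage the loser's
            delta = lr if player == winner else -lr
            probs = policy[state]
            probs[i] = min(max(probs[i] + delta, 0.01), 0.99)
            probs[1 - i] = 1.0 - probs[i]
    return policy

policy = train()
```

After training, the policy reliably learns easy endgame positions, e.g. with two stones left it strongly prefers taking both and winning outright. AlphaGo’s version of this loop operates over deep policy and value networks and vastly more games, but the feedback signal — adjust toward what won, away from what lost — is the same.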
Hassabis noted that all of that trial-and-error learning required an immense amount of computing power, and in AlphaGo’s case, the AI took advantage of Google Cloud Platform.
“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems,” Hassabis concluded. “Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis. We’re excited to see what we can use this technology to tackle next!”
Google published a full report on the methodology behind AlphaGo in the scientific journal Nature.