UPDATED 12:39 EST / FEBRUARY 09 2017

EMERGING TECH

DeepMind’s AIs can fight or cooperate – just like people

As artificial intelligence becomes more commonly used for everything from catching cancer early to describing images to the blind, what happens when two or more agents have overlapping goals? Will they battle for dominance or help one another out?

According to Alphabet Inc.-owned DeepMind Technologies Ltd., the answer is both. DeepMind pitted two self-interested AI agents against each other in a simple 2D gathering game that involved collecting apples. Each agent could also fire a beam to temporarily disable the other, but neither received any direct reward for doing so.

“Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can,” the DeepMind team explained in a blog post. “However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”

Interestingly, DeepMind also noted that agents capable of more complex strategies were more likely to use their beam ability regardless of how many apples were available. In other words, a smarter AI was more likely to behave aggressively than cooperatively.
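The incentive structure behind this result can be sketched in a few lines. The snippet below is a hypothetical simplification, not DeepMind's actual environment (which is a richer gridworld played by deep reinforcement-learning agents): collecting an apple pays a reward of 1, while tagging pays nothing directly but sidelines the opponent for a few steps, which is only worth doing when apples are scarce. The `TAG_TIMEOUT` value is an assumed placeholder.

```python
TAG_TIMEOUT = 5  # steps a tagged agent sits out (assumed value, for illustration)

def step(agent, opponent, action, apples):
    """Apply one agent's action; return (reward, apples_left).

    Agents are dicts like {"timeout": 0}. Only collecting an apple
    yields reward; tagging earns nothing directly but disables the
    opponent for TAG_TIMEOUT steps.
    """
    if opponent["timeout"] > 0:            # tagged opponent sits out this step
        opponent["timeout"] -= 1
    if agent["timeout"] > 0:               # a disabled agent can't act or earn
        agent["timeout"] -= 1
        return 0, apples
    if action == "collect" and apples > 0:
        return 1, apples - 1               # apples are the only reward source
    if action == "tag":
        opponent["timeout"] = TAG_TIMEOUT  # no direct payoff for aggression
        return 0, apples
    return 0, apples                       # e.g. moving, or collecting with no apples left
```

With plenty of apples, tagging just wastes a turn; with few apples, the tagged opponent's lost turns translate into extra apples for the aggressor, which is the trade-off DeepMind's agents learned.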

Image courtesy of DeepMind Technologies Inc.

This might sound like bad news for the future of humanity, but fortunately for our long-term survival, DeepMind discovered that the more complex AI could also be more cooperative in a different environment. In another experiment, DeepMind tested its agents in a game called Wolfpack, in which the agents played as wolves that had to work together, navigating a 2D environment while pursuing prey.

If the AI wolves were close together when the prey was caught, both received credit for the capture regardless of which agent actually reached it first. That meant the agents were more successful when they worked together to surround and trap the prey. Unlike in the gathering game, the more complex agents were more likely to cooperate in Wolfpack than the simpler ones.
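The shared-credit rule described above can be sketched as follows. This is an illustrative simplification under assumed parameters (the `CAPTURE_RADIUS` value and Manhattan-distance metric are placeholders, not DeepMind's published settings): when the prey is caught, every wolf close enough to the capture point is rewarded equally.

```python
CAPTURE_RADIUS = 2  # assumed: wolves within this grid distance share the reward

def capture_rewards(wolf_positions, prey_pos):
    """Return a reward per wolf when the prey is captured.

    Every wolf within CAPTURE_RADIUS of the capture point gets 1,
    regardless of which wolf actually made the catch; wolves that
    hunted alone and far away get nothing.
    """
    def dist(a, b):
        # Manhattan distance on the 2D grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    return [1 if dist(p, prey_pos) <= CAPTURE_RADIUS else 0
            for p in wolf_positions]
```

Because reward depends on proximity at capture time rather than on who struck first, the payoff structure itself favors surrounding the prey together, which is why greater strategic capacity pushed the agents toward cooperation here rather than aggression.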

“So, depending on the situation, having a greater capacity to implement complex strategies may yield either more or less cooperation,” the DeepMind team explained.

DeepMind’s experiments are incredibly simplified compared with the real-world problems that AI agents are tackling every day, but the research team said that their findings show that AI can simulate how new policies could affect cooperation.

“As a consequence, we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation,” the DeepMind team concluded.

For more in-depth information, you can read DeepMind’s full research paper about the experiments. You can also watch videos of DeepMind’s gathering and wolfpack games below:

