UPDATED 14:10 EDT / OCTOBER 04 2017

EMERGING TECH

Following health data controversy, Google’s DeepMind forms AI ethics unit

Stung by criticism over the improper use of health data, DeepMind Technologies Ltd., Google LLC’s artificial intelligence research division, is expanding the scope of its work beyond the technical realm.

The U.K.-based group this morning announced the launch of a new team tasked with exploring the ethical challenges that accompany the spread of AI. The unit is jointly headed by Verity Harding, who previously led public policy for Google’s European business, and technology consultant Sean Legassick. DeepMind aims to triple the team’s current headcount of eight over the next year.

The team will be advised by an outside group of “DeepMind Fellows” that is set to include economists, philosophers and other experts whose fields bear on the AI debate. There are also plans to collaborate with universities pursuing similar lines of research.

On top of creating ethical guidelines for DeepMind’s work, the unit will try to predict the ways AI could reshape society. The effort is set to emphasize big questions such as how to ensure that AI systems uphold user rights and what economic impact they’ll have. DeepMind expects the team to publish its first research papers sometime next year.

That said, the move to establish the unit is more than academic. Earlier this year, DeepMind came under fire for a project with the U.K.’s National Health Service that violated regulations on processing patient records. An in-house ethics team could help steer the division away from misusing its AI technologies in the future.

In the long run, research produced by the new unit could benefit other Google groups as well. The search giant’s Waymo autonomous driving subsidiary is a prime candidate. The prospect of self-driving cars hitting the road en masse has raised thorny questions, such as how a vehicle should handle the difficult choices that must be made when an accident is unavoidable.

Many ethical blind spots remain even as Google and other tech companies race ahead with their AI ambitions. Just a few months ago, DeepMind shared the results of a project that sought to train neural networks to behave more like humans by having them learn the basics of walking in simulated environments.

Image: DeepMind
