Google says it won’t help weaponize AI, but it won’t quit working with the military
Google Inc. has tried to make clear just how far it will go in helping the military with technology, following months of controversy over its work with the Pentagon on artificial intelligence software that analyzes drone video footage.
The company lost employees over the Project Maven work, which became a point of contention given that Google’s former motto was “Don’t be evil.” Critics pointed out that working with the military ran counter to that principle, and Google promised it would create ethical guidelines to let people know where it stood.
Google has now made good on that promise, publishing its AI principles on Thursday in terms that, not surprisingly, cast the company in a favorable light. The technology, Google says, will be used for the common good: to predict natural disasters, to diagnose disease, to prevent blindness.
“As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides,” said Google Chief Executive Sundar Pichai.
The company didn’t expressly say whether it would stop developing AI to go through hours of drone footage for the military. The crux of the issue has been that although such technology is “nonoffensive,” drones drop bombs on people, so the work can hardly be called unrelated to the spilling of blood.
In the guidelines Google did say that it would not create AI that could be used to harm people. “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” said the company, adding that this included “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
As far as Project Maven is concerned, Google also said it won’t develop technology that is used to “gather or use information for surveillance violating internationally accepted norms.” Does this mean the deal is off? It’s not entirely clear.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Google. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
Speaking to The Verge, a Google spokesperson was vague, saying that the company wouldn’t work on AI surveillance projects if they violated “internationally accepted norms.” Google will reportedly honor its contract with the Pentagon until 2019. The contract, reportedly worth $10 billion, was also said to have been sought after by Microsoft Corp., IBM Corp. and Amazon.com Inc.
Image: Rennett Stowe via Flickr