UPDATED 23:25 EDT / APRIL 17 2019

AI

Microsoft won’t share facial recognition technology with police, citing human rights concerns

While China doubles down on its efforts to track its citizens and fears fester in the U.S. that a similar Orwellian level of surveillance could take hold at home, Microsoft Corp. may have just alleviated some of those concerns.

Reuters reported Tuesday that law enforcement in California had asked Microsoft to share its facial recognition technology. The company refused, saying it feared the artificial intelligence could breach human rights.

It appears that police wanted the AI installed in cars and on officers’ body cameras, but according to the report, Microsoft said there was a good chance the software would be biased. The company added that because the technology had been trained mostly on images of white men, it could well lead to a disproportionate number of women and minorities being taken in for questioning.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said at an event at Stanford University. “We said this technology is not your answer.” The conference’s topic was “human-centered artificial intelligence.”

Notwithstanding fears among some of the populace of a future of omnipresent surveillance cameras, there have been many reports of facial recognition technology simply getting it wrong. The technology has been shown to produce false positives on people of color, prompting some to accuse tech firms of writing “racist code.”

If any firm has taken a beating for its willingness to export its sometimes-faulty facial recognition technology to authorities, it’s Amazon.com Inc. It was revealed last year that the company was selling its Rekognition technology to law enforcement, prompting the American Civil Liberties Union and various civil-rights advocacy groups to accuse Amazon of helping to create a surveillance state in the U.S. Next month, company shareholders will have a chance to vote on banning facial recognition development and embracing government regulation, though the vote is likely to be largely symbolic.

Like many others in the tech industry, Smith has been outspoken in saying there should be full transparency in the development of AI: its shortcomings should be well documented, and ethics should always take precedence over the bottom line.

Nonetheless, Smith admitted that Microsoft had supplied the technology to a prison system. He said this was only because the environment was limited and the company believed the technology could help reduce violence in prisons.

Smith said that developing facial recognition and similar AI without regulation and human rights in mind would be a frivolous game, adding that winning such a race would only mean winning a “race to the bottom.”

Image: Justin Pickard/Flickr
