Google launches new AI-based tool to help combat child sexual abuse material
Google LLC today released a new artificial intelligence tool that aims to assist organizations in identifying and removing online child sexual abuse material.
The Content Safety API is a toolkit that uses deep neural networks for image processing to identify the material quickly while minimizing the need for human inspection, a cumbersome process that often requires reviewers to sift through thousands of images manually.
“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” Google engineering lead Nikola Todorovic and product manager Abhi Chaudhuri said in a blog post. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”
In testing, Google says the tool significantly speeds up the review of potential child sexual abuse material: reviewer response times, the time it takes to find and take action on material, improved by up to 700 percent.
VentureBeat reported that the announcement comes shortly after Google was criticized by U.K. Foreign Secretary Jeremy Hunt for not doing enough to remove abuse content, with Hunt drawing parallels to Google’s controversial decision to return to China with a censored search engine.
“Seems extraordinary that Google is considering censoring its content to get into China but won’t cooperate with U.K., U.S. and other 5 eyes countries in removing child abuse content,” Hunt wrote on Twitter. “They used to be so proud of being values-driven.”
In a prepared statement, at least one group working in the area welcomed the announcement. “We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material,” said Susie Hargreaves of the Internet Watch Foundation, a U.K.-based organization that fights against abuse material. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”
NGOs and similar organizations can get access to the tool via this form.
Image: Google