UPDATED 12:44 EDT / JUNE 13 2017

EMERGING TECH

New York Times moderates comments using Google’s AI

In a perfect world, comment sections on news sites would be a great place to have rational discussions about what is happening in the world. Unfortunately, we do not live in a perfect world, and bitter disagreement and name-calling often lead sites to lock comments entirely rather than deal with them.

The New York Times is looking to solve that problem through the use of artificial intelligence. Today, the Times is rolling out a new AI comment moderator built on Perspective, an application programming interface from Jigsaw, the think tank spun out of Google Inc. parent Alphabet Inc. According to the Times, its new AI, simply called “Moderator,” will allow the site to open up comments on more articles.

The Times said in a statement that because of the labor-intensive process of manually moderating comments, it previously allowed comments on only about 10 percent of its articles. Now with the launch of Moderator, the publication says that it will be able to allow comments on 25 percent of its articles, and it hopes to eventually expand comments to up to 80 percent or more of its articles.

Rather than automating all of the moderation process, Moderator instead uses machine learning to help the Times’ human employees moderate more comments in less time. Using Google’s Conversation AI, Perspective rates comments on how likely they are to be considered toxic. The AI looks at a wide range of factors in each comment, including profanity, racial slurs and other inflammatory language. In addition to the Times, Perspective is also used for moderation on sites like Wikipedia and The Guardian.
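To make the scoring step concrete, here is a minimal sketch of the kind of request a publisher might send to Perspective to score a single comment for toxicity. The endpoint and field names follow Perspective’s public “analyze” request format; the code only builds the JSON body and makes no network call, and any real request would also need an API key.

```python
import json

# Base endpoint for Perspective's comment-analysis API (no call is made here).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(comment_text):
    """Build the JSON body asking Perspective to score one comment for TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

body = build_toxicity_request("You are a complete idiot.")
print(json.dumps(body, indent=2))
```

Perspective’s response to such a request includes a probability-style score between 0 and 1 for each requested attribute, which is what lets a moderation tool rank comments by how likely readers are to find them toxic.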

AI moderation has become a growing trend in online communities, which until recently have had to rely on human moderators to keep tabs on discussions. For example, Amazon’s Twitch uses a similar tool called AutoMod to help live streamers get a handle on their chat rooms.

“League of Legends” developer Riot Games also uses AI and machine learning to limit toxicity in online games and fight player abuse. Riot’s AI flags potentially toxic messages and automatically warns users that their behavior is out of line and they could face penalties if they continue.

At Game Developers Conference 2015, Jeffrey Lin, former lead designer at Riot, said that not only does the AI restrict the amount of toxic content in the game, it also helps reform toxic players by calling out their behaviors. According to Lin, players reform 50 percent of the time if they are given the exact reason they are being banned, and he said that the reform rate rises to 70 percent if the player is also shown evidence of the behavior.

The AI moderator used by the Times does not play any role in alerting users to their behaviors, but rather it makes it easier for human moderators to spot toxic comments. The Times’ human moderators make the final decision on what to do about those comments, which most likely would be simply deleting the comment and possibly banning the user.
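This human-in-the-loop workflow can be sketched as a simple triage step. The code below is illustrative, not the Times’ actual system: it assumes each comment already carries a 0-to-1 toxicity score of the kind Perspective returns, and the threshold value is an arbitrary assumption for the example.

```python
# Illustrative triage: flag likely-toxic comments for human review, worst first,
# and let everything below the threshold through. Threshold 0.8 is an assumption.
def triage(comments, threshold=0.8):
    """Return (needs_review, likely_fine) lists; flagged comments sorted worst-first."""
    flagged = sorted(
        (c for c in comments if c["toxicity"] >= threshold),
        key=lambda c: c["toxicity"],
        reverse=True,
    )
    fine = [c for c in comments if c["toxicity"] < threshold]
    return flagged, fine

comments = [
    {"text": "Great reporting!", "toxicity": 0.02},
    {"text": "You people are morons.", "toxicity": 0.94},
    {"text": "I disagree with this take.", "toxicity": 0.11},
]
flagged, fine = triage(comments)
# flagged contains only the 0.94 comment; the other two pass through.
```

The point of the design is that the machine only ranks; a human moderator still decides whether a flagged comment is deleted or its author penalized.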

Photo: Haxorjoe (own work), CC BY-SA 3.0
