Google is helping the New York Times expand its online article comments with an advanced tool designed to fight internet trolls.
Previously, readers were limited to commenting on about 10 percent of articles because the publication had just 14 human moderators to review its nearly 12,000 daily comments, Recode reported.
But now, the moderators are getting some much-needed assistance from a machine-learning algorithm that Google first released back in February.
While no comment will be automatically rejected by the algorithm, the technology flags offensive hate speech, online harassment, and anything else scored as “toxic,” helping human moderators determine more quickly whether a comment is appropriate to post.
“It’s become too easy for trolls to dominate conversations online. People are either leaving the conversation entirely or comments sections are being shut down. The power of machine learning offers us an opportunity to tip the scales and reverse this trend,” Jared Cohen, CEO of Jigsaw, told Fortune. “This is why we built Perspective, technology that puts the power of machine learning into the hands of publishers and platforms to host better discussions online.”
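In practice, a publisher integrating Perspective sends a comment's text to the API's `comments:analyze` endpoint and receives back a toxicity score between 0 and 1, which moderators can use to triage the queue. Here is a minimal Python sketch of that flow; the 0.9 review threshold and sample comment are illustrative assumptions, not the Times' actual settings, and a real integration would also need an API key and an HTTP client.

```python
# Sketch of a Perspective-style moderation triage step.
# The request shape mirrors Perspective's comments:analyze endpoint;
# the threshold value below is an assumption for illustration.

def build_analyze_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def needs_review(toxicity_score, threshold=0.9):
    """Flag a comment for a human moderator; nothing is auto-rejected."""
    return toxicity_score >= threshold

# Example: prepare a request and triage two hypothetical scores.
payload = build_analyze_request("This comment might be rude.")
print(payload["requestedAttributes"])   # attributes requested from the API
print(needs_review(0.95))               # high score -> send to a moderator
print(needs_review(0.12))               # low score -> likely fine to post
```

Note that the score only prioritizes the moderators' queue; as the article stresses, the final accept-or-reject call stays with a human.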
According to Recode, with the help of the tool, moderators will now be able to screen comments on around 25 percent of the Times’ articles, and the publication reportedly hopes to eventually open comments on 80 percent of all stories.
To see Perspective’s comment filtering in action, check out the video below:
While this may be a small step for human moderators, it’s a positive leap for comment readers everywhere.