Facebook Uses Machine Learning To Remove 8.7 Million Child Exploitation Posts
Thursday, October 25, 2018, 12:00 PM, from Slashdot
Facebook announced today in a blog post that it removed 8.7 million posts last quarter that violated its rules against child exploitation. The company said it used new AI and machine learning technology to remove 99 percent of those posts before anyone reported them. TechCrunch reports:

The new technology examines posts for child nudity and other exploitative content as they are uploaded and, if necessary, reports photos and accounts to the National Center for Missing and Exploited Children. Facebook had already been using photo-matching technology to compare newly uploaded photos with known images of child exploitation and revenge porn, but the new tools are meant to prevent previously unidentified content from being disseminated through its platform.

The technology isn't perfect, and many parents have complained that innocuous photos of their kids were removed. Antigone Davis, Facebook's Global Head of Safety, addressed this in her post, writing that in order to 'avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath,' and that this 'comprehensive approach' is one reason Facebook removed as much content as it did last quarter. The tech isn't always right, though. In 2016, Facebook was criticized for removing content like the iconic 1972 photo of Phan Thi Kim Phuc, known as 'Napalm Girl,' fleeing naked after suffering third-degree burns in a South Vietnamese napalm attack on her village. COO Sheryl Sandberg apologized at the time.
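Facebook has not published how its photo-matching works (industry systems typically use robust perceptual hashes such as Microsoft's PhotoDNA, whose algorithm is not public). As a rough illustrative sketch of the general idea, the Python below compares a simple average hash of an uploaded image against a set of hashes of previously identified images; the file names, threshold, and hash choice are all hypothetical, not Facebook's actual method.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to a size x size grayscale thumbnail and emit one bit
    per pixel: 1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously identified images.
known_hashes = {average_hash("known_image.jpg")}

def matches_known(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its hash is within `threshold` bits of any
    known hash, tolerating small edits like recompression or resizing."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

A hash-based check like this can only catch copies of already-known images, which is why the article distinguishes it from the new classifier-style tools that try to flag previously unidentified content at upload time.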
Read more of this story at Slashdot.
rss.slashdot.org/~r/Slashdot/slashdot/~3/ccKdf6CxgM4/facebook-uses-machine-learning-to-remove-87-mil...