Facebook announced today that it removed 8.7 million pieces of content that violated its rules against child exploitation last quarter, thanks to new technology. The new AI and machine learning tools, which the company developed and deployed over the past year, removed 99 percent of those posts before anyone reported them, said Antigone Davis, Facebook’s global head of safety, in a blog post.

The new technology examines posts for child nudity and other exploitative content as they are uploaded and, when necessary, reports photos and accounts to the National Center for Missing and Exploited Children. Facebook had already been using photo-matching technology to compare newly uploaded photos with known images of child exploitation and revenge porn, but the new tools are meant to prevent previously unidentified content from being disseminated through its platform.
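For readers curious how photo-matching of this kind generally works: systems in this space (such as Microsoft’s PhotoDNA, which Facebook has reportedly used) compute a perceptual hash of each upload and compare it against hashes of known violating images, so that near-duplicates survive resizing, cropping, and re-encoding. Facebook’s actual implementation is proprietary; the sketch below is a minimal illustration using the open-source `imagehash` library, and the file names and distance threshold are assumptions for demonstration only.

```python
# Illustrative perceptual-hash matching, NOT Facebook's actual system.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known violating images.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["known_image_1.jpg", "known_image_2.jpg"]  # placeholder files
]

HAMMING_THRESHOLD = 8  # assumed cutoff; production systems tune this carefully

def matches_known_image(upload_path: str) -> bool:
    """Return True if an uploaded photo is a near-duplicate of a known image."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance;
    # small distances indicate visually near-identical images.
    return any(upload_hash - known < HAMMING_THRESHOLD for known in known_hashes)

if matches_known_image("new_upload.jpg"):
    print("Flag for human review and reporting")
```

Hash matching like this can only catch images already in the database, which is why the newer machine learning classifiers described above matter: they aim to flag previously unseen content at upload time.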

The technology isn’t perfect: many parents have complained that innocuous photos of their kids were removed. Davis addressed this in her post, writing that in order to “avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath,” and that this “comprehensive approach” is one reason Facebook removed as much content as it did last quarter.

Many people, however, believe Facebook’s moderation technology is neither comprehensive nor accurate enough. Beyond family snapshots, it has also been criticized for removing content like the iconic 1972 “Napalm Girl” photo of Phan Thi Kim Phuc fleeing naked after suffering third-degree burns in a South Vietnamese napalm attack on her village, a decision COO Sheryl Sandberg apologized for.

Source: Facebook says it removed 8.7M child exploitation posts with new machine learning tech | TechCrunch