Facebook has announced that its moderators have removed some 8.7 million images of child nudity in just the last three months.
The removals were aided significantly by previously undisclosed AI software that Facebook has been running for the past 12 months, which flags such images automatically for removal.
Thanks to the AI, Facebook says 99% of the 8.7 million removed images were taken down before any user had reported them.
How did they do it?
The machine-learning tool works by identifying images that contain both nudity and a child, allowing far more effective and efficient enforcement of Facebook’s ban on photos that show minors in any potentially sexualized context.
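The general approach described above can be sketched as follows. This is a purely illustrative toy, not Facebook's actual system: the detector scores, threshold, and function names are all hypothetical stand-ins for whatever classifiers the real pipeline uses.

```python
# Illustrative sketch only (not Facebook's system): content is flagged when
# two hypothetical classifiers — one detecting a minor, one detecting
# nudity — are BOTH confident, then queued for human review.

def should_flag(minor_score: float, nudity_score: float,
                threshold: float = 0.8) -> bool:
    """Flag an image only when both signals exceed the threshold."""
    return minor_score >= threshold and nudity_score >= threshold

def prioritize(images: list[dict]) -> list[dict]:
    """Queue flagged images for human reviewers, most confident first."""
    flagged = [img for img in images
               if should_flag(img["minor"], img["nudity"])]
    # Sort by combined confidence so reviewers see the likeliest cases first.
    return sorted(flagged, key=lambda img: img["minor"] * img["nudity"],
                  reverse=True)
```

The two-signal design mirrors the article's description: neither nudity alone nor the presence of a child alone triggers a flag, and, as Davis notes, the machine output is used to prioritize a human review queue rather than to act unilaterally.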
Speaking to the Reuters news agency, Facebook’s global head of safety, Antigone Davis, said Facebook is also considering rolling out its systems for spotting child nudity and grooming to Instagram. “The machines help us prioritize,” said Davis, and help to “more efficiently queue” problematic content for human reviewers.
She said Facebook has also removed accounts that promote child pornography, and in the interests of safeguarding children, was also taking action on nonsexual content such as seemingly innocent photos of children in the bath.
What other action is Facebook taking?
“In addition to photo-matching technology, we’re using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it’s uploaded,” Davis concluded.
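The photo-matching technology Davis mentions works by fingerprinting known exploitative images and checking uploads against that index. A minimal sketch of the idea is below; note that production systems use robust perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas this toy uses an exact cryptographic hash for simplicity, and the class and method names are hypothetical.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in fingerprint: real matching systems use perceptual hashes
    # (e.g. PhotoDNA) that tolerate edits; SHA-256 only matches exact copies.
    return hashlib.sha256(image_bytes).hexdigest()

class KnownImageIndex:
    """Toy index of previously identified images (illustrative only)."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def add_known(self, image_bytes: bytes) -> None:
        """Register a known bad image's fingerprint."""
        self._hashes.add(fingerprint(image_bytes))

    def is_known(self, upload_bytes: bytes) -> bool:
        """Check an upload against the index of known fingerprints."""
        return fingerprint(upload_bytes) in self._hashes
```

This also illustrates why, per Davis's quote, matching alone is not enough: an index can only catch *previously known* content, which is why the classifier-based detection is needed for content seen for the first time.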
In an interview with the UK-based Daily Telegraph, Facebook’s UK Public Policy Manager, Karim Palant, said the social network removed 20 million images of adult nudity in the first three months of the year, as well as three million posts under its hate speech rules.
Facebook also announced earlier this year that it was increasing the number of human moderators reviewing content, and that by the end of 2018 it would have around 20,000 people working directly in this area.