Photo-matching tech will allow users to easily report intimate images posted without consent.
Facebook is acting in a new bid to limit the amount of ‘revenge porn’ being shared and reposted across its family of social media platforms, in a move praised by campaigners and advocacy groups. The new measures will be rolled out across Facebook, Facebook Messenger, and Instagram, but not WhatsApp. Facebook will also be relying on users flagging inappropriate content via its ‘Report’ tool rather than seeking to identify ‘porn’ images itself.
The new set of tools will allow users to easily report any intimate photos posted without consent that they see on the social network. The flagged pictures will then be assessed by “specially trained representatives” within Facebook, who will “review the image and remove it if it violates community standards”.
Speaking to the BBC, Facebook’s head of safety said: “This is a first step and we will be looking to build on the technology to see if we can prevent the initial share of the content… These tools, developed in partnership with safety experts, are one example of the potential technology has to help keep people safe. Facebook is in a unique position to prevent harm, one of our five areas of focus as we help build a global community.”
If reported images are judged to be revenge porn, they will be removed and the account that posted them blocked, pending a potential appeal.
Facebook will be relying on the same type of photo-recognition software it currently uses to prevent child abuse imagery being shared. The software works by recognising previously banned images and automatically blocking any attempt to re-upload them.
However, the social media giant can do little to stop the original ‘revenge’ images being posted the first time round. Picture-recognition software relies on matching images against those already stored in its databases before blocking them. Crucially, a dubious image must first be flagged by users before any action can be taken.
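For readers curious how this kind of photo matching works in practice, the sketch below shows a toy “average hash” matcher in Python. It is purely illustrative: Facebook has not published its implementation, the industry systems used for child abuse imagery (such as Microsoft’s PhotoDNA) rely on far more robust perceptual hashing, and the function names here (flag_image, is_banned) are hypothetical.

```python
# Minimal sketch of hash-based photo matching, assuming a simple "average hash".
# Real systems use much more robust perceptual hashes; this is only illustrative.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit hash


def average_hash(path: str) -> int:
    """Downscale to an 8x8 greyscale image, then set a bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hashes of images already reviewed and banned by human moderators.
banned_hashes: set[int] = set()


def flag_image(path: str) -> None:
    """Record a reported image once reviewers judge it violates policy (hypothetical helper)."""
    banned_hashes.add(average_hash(path))


def is_banned(path: str, threshold: int = 5) -> bool:
    """Block re-uploads that closely match a previously banned image.

    A brand-new image has no hash on file, so it passes through — which is
    why the original post still has to be reported by users first.
    """
    h = average_hash(path)
    return any(hamming_distance(h, b) <= threshold for b in banned_hashes)
```

The key point the sketch illustrates is the limitation described above: the database only contains hashes of images that have already been flagged and reviewed, so the very first upload of a new image cannot be caught automatically.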