


Actually, Facebook Will Not Blur the Nudes Sent to Its Anti-Revenge Porn Program

The company originally told me it was obscuring the images, but then said it wasn’t.

Earlier this week, I reported on a new pilot program designed to combat revenge porn that Facebook is testing in Australia. The program involves sending nude photographs to the social network ahead of time, to prevent their potential spread in the future.

Security researchers and journalists—including me—had additional questions about exactly how it works, so I reached out to Facebook to ask. I then wrote a follow-up piece with more details the company provided, the most important of which was that actual humans will manually review the explicit photos sent to the social network.


In my conversations with Facebook, however, I discovered that the way the anti-revenge porn feature was initially described to me was incorrect. The Facebook spokesperson I talked to misrepresented a key detail of the program, and later confirmed to me that the feature did not work the way they had said it would. Facebook said that the nude images would be blurred when they were reviewed by humans, but that is not the case.

This is how the anti-revenge porn pilot program is going to work in Australia, according to a blog post Facebook published Thursday: First, users file a report with the country’s eSafety Commissioner’s office, saying that they want to preemptively report photos as being explicit, to prevent their spread in the future as revenge porn. The user is then asked to send the photos to themselves via Facebook Messenger. Facebook is made aware of the report, and then an actual human, a member of Facebook’s Community Operations team, looks at the nude photograph to ensure that it violates Facebook’s standards.

This is where a Facebook spokesperson initially described the process incorrectly. When I first reached out to ask how the program would work, I was told that the images would be blurred. This was repeated to me twice.

Facebook’s blog post didn’t mention blurring at all, and what I was told contradicted other reporting, so I reached out to Facebook again in the hope of clearing up the confusion. Finally, I was told that there will actually be no blurring. This means that Australians who want to use the new program have to voluntarily decide they are comfortable with Facebook’s Community Operations team seeing them naked. “To prevent adversarial reporting, at this time we need to have humans review the images in a controlled, secure environment,” Facebook’s Chief Security Officer Alex Stamos said in a tweet.

Facebook is including human reviewers in the process to prevent legitimate images from being inadvertently tagged as revenge porn. As recent studies have shown, image-recognition algorithms are still extremely easy to spoof.

After the images are reviewed by a human, a “hash,” or unique digital fingerprint, is created from each one. Facebook does not retain the images themselves, just the hashes. Once it has created a hash, it notifies the person who uploaded the original image, and they are asked to delete it from Messenger. Then Facebook deletes the image from its servers, retaining only the hash.
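Facebook hasn’t published the exact fingerprinting algorithm it uses, but the general idea behind a perceptual image hash can be sketched in a few lines. The snippet below is a minimal, illustrative “average hash” written in Python with the Pillow library; the function name, hash size, and file path are my own assumptions, not details from Facebook.

    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to a tiny grayscale thumbnail so minor edits (resizing,
        # mild re-compression) barely change the resulting fingerprint.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # Each pixel contributes one bit: 1 if brighter than average, else 0.
        bits = "".join("1" if p > avg else "0" for p in pixels)
        return f"{int(bits, 2):0{hash_size * hash_size // 4}x}"

    print(average_hash("reported_photo.jpg"))  # e.g. 'ffd8b06068c3e1f0'

The output is a short hex string that acts as the photo’s fingerprint: it can be stored and compared later without keeping the photo itself.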

Each time a user subsequently uploads an image, it’s tested against Facebook’s database of hashes. If one matches a hash labeled revenge porn, Facebook stops the user from posting it. As Stamos pointed out in his tweets, this is an imperfect solution to an incredibly difficult problem.
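Facebook hasn’t said precisely how an upload is compared against the stored fingerprints, but conceptually the check is a lookup: compute the hash of the new upload and see whether it matches, or nearly matches, one labeled as revenge porn. The sketch below assumes hex-encoded fingerprints and a small bit-difference tolerance to catch re-encoded copies; the tolerance and the hash values are illustrative assumptions, not Facebook’s actual parameters.

    def hamming_distance(h1, h2):
        # Number of bits that differ between two hex-encoded fingerprints.
        return bin(int(h1, 16) ^ int(h2, 16)).count("1")

    def is_blocked(upload_hash, reported_hashes, max_distance=5):
        # Block if the upload matches a reported fingerprint exactly, or is
        # within a few bits of one (to tolerate re-encoding or resizing).
        return any(hamming_distance(upload_hash, h) <= max_distance
                   for h in reported_hashes)

    reported = {"ffd8b06068c3e1f0"}                  # hashes Facebook retains
    print(is_blocked("ffd8b06068c3e1f1", reported))  # True: only one bit differs
    print(is_blocked("0123456789abcdef", reported))  # False: unrelated image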

It’s unclear why this happened, but I think it’s important to highlight the confusion. Major tech corporations like Facebook are incredibly careful about how they present themselves to the public, and this is an instance in which the company initially got the communication about its own program wrong.