
Facebook’s Internal Moderator Policies Should Have Been Published Years Ago

It's taken a leak to the press to understand what Facebook really wants.
Image: Surian Soosay/Flickr. Edited by author.

On Sunday evening, The Guardian published an investigation into Facebook's internal policies on moderating different types of content—ranging from self-harm, to terrorism, to violence against women.

In doing so, the newspaper pulled back the veil shrouding the grey areas of controversies that have increasingly marred the social media site. But shouldn't the guidelines have been available for all to see anyway?


The newspaper said it had seen more than 100 internal training manuals used to train some 4,500 Facebook moderators on what kind of content is acceptable on the site and what posts need to be removed.

For example: statements like "Someone shoot Trump" are to be removed by moderators, as Donald Trump is a head of state. Comments like "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat" are allowed to stay, however, because the threat is not considered credible.

Brooklyn attorney Carrie Goldberg, who is fighting revenge porn one court case at a time, told Motherboard that the guidelines shouldn't have had to be leaked to the press—they should have been out in the open from the very start.

"These internal rulebooks should be public to begin with so that we can manage our expectations about what we can expect to see there and so that we can ditch the company if we don't like what they allow—or disallow," Goldberg told Motherboard in an email on Monday. "Like other social media companies, Facebook is disincentivized from properly staffing its safety departments because of federal law that says it is not liable for content that its users post."

That abdication of responsibility, along with the onus on the users to report problem content, is the biggest scandal here, Goldberg said, illustrating that "any one of us could be injured on Facebook—through revenge porn or a live streamed rape or fake news holding us out as a paedophile—and have no legal recourse against the company hosting the content and profiting from it being there."


But while Goldberg praised Facebook for being among the first to ban revenge porn, and applauded "passionate and conscientious individuals in Facebook's global safety department who are true innovators when it comes to doing good," she questioned why, if Facebook can't be sued for the harms users suffer on its platform, the company would bother to put money toward policing it at all.

Read more: Facebook Has to Keep Apologizing for Its Business Model

"Almost since its inception, Facebook was too big to manage which is why users—and not staff—are responsible for reporting problem content," Goldberg said.

In a statement to Motherboard, a Facebook spokesperson said, "We're going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help."

But if we're to truly understand what we can or cannot tolerate on Facebook, we need to be clear on what is allowed—rather than wade through years of confusing and damaging backtracks.
