The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People

Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.

Aug 23 2018, 5:15pm

Image: Chris Kindred

This spring, Facebook reached out to a few dozen leading social media academics with an invitation: Would they like to have a casual dinner with Mark Zuckerberg to discuss Facebook’s problems?

According to five people who attended the series of off-the-record dinners at Zuckerberg’s home in Palo Alto, California, the conversation largely centered around the most important problem plaguing the company: content moderation.

In recent months, Facebook has been attacked from all sides: by conservatives for what they perceive as a liberal bias, by liberals for allowing white nationalism and Holocaust denial on the platform, by governments and news organizations for allowing fake news and disinformation to flourish, and by human rights organizations for the platform’s role in facilitating gender-based harassment and livestreamed suicide and murder. Facebook has even been blamed for contributing to genocide.

These situations have been largely framed as individual public relations fires that Facebook has tried to put out one at a time. But the need for content moderation is better looked at as a systemic issue in Facebook’s business model. Zuckerberg has said that he wants Facebook to be one global community, a radical ideal given the vast diversity of communities and cultural mores around the globe. Facebook believes highly nuanced content moderation can resolve this tension, but it’s an unfathomably complex logistical problem that has no obvious solution, that fundamentally threatens Facebook’s business, and that has largely shifted the role of free speech arbitration from governments to a private platform.

The dinners demonstrated a commitment from Zuckerberg to solve the hard problems that Facebook has created for itself through its relentless quest for growth. But several people who attended the dinners said they believe that they were starting the conversation on fundamentally different ground: Zuckerberg believes that Facebook’s problems can be solved. Many experts do not.

*

Signs hanging on the wall of Facebook's Menlo Park, California headquarters. Image: Jason Koebler

The early, utopian promise of the internet was as a decentralized network that would connect people across hundreds of millions of websites and communities, each in charge of creating its own rules. But as the internet has evolved, it has become increasingly corporatized, with companies like Facebook, YouTube, Instagram, Reddit, Tumblr, and Twitter replacing individually owned websites and forums as the primary speech outlets for billions of people around the world.

As these platforms have grown in size and influence, they’ve hired content moderators to police their websites—first to remove illegal content such as child pornography, and then to enforce rules barring content that could drive users away or create a PR nightmare.

“The fundamental reason for content moderation—its root reason for existing—goes quite simply to the issue of brand protection and liability mitigation for the platform,” Sarah T. Roberts, an assistant professor at UCLA who studies commercial content moderation, told Motherboard. “It is ultimately and fundamentally in the service of the platforms themselves. It’s the gatekeeping mechanisms the platforms use to control the nature of the user-generated content that flows over their branded spaces.”

To understand why Facebook, Twitter, YouTube, and Reddit have rules at all, it’s worth considering that, as these platforms have become stricter, “free speech”-focused clones with few or no rules at all have arisen, largely floundered, and are generally seen as cesspools filled with hateful rhetoric and teeming with Nazis.

And so there are rules.

Twitter has “the Twitter Rules,” Reddit has a “Content Policy,” YouTube has “Community Guidelines,” and Facebook has “Community Standards.” There are hundreds of rules on Facebook, all of which fall into different subsections of its policies, and most of which have exceptions and grey areas. These include, for example, policies on hate speech, targeted violence, bullying, and porn, as well as rules against spam, “false news,” and copyright infringement.

Though much attention has rightly been given to the spread of fake news and coordinated disinformation on Facebook, the perhaps more challenging problem—and one that is baked into the platform’s design—is how to moderate the speech of users who aren’t fundamentally trying to game the platform or undermine democracy, but are simply using it on a daily basis in ways that can potentially hurt others.

Image: Bloomberg via Getty Images

Facebook has a “policy team” made up of lawyers, public relations professionals, ex-public policy wonks, and crisis management experts that makes the rules. The rules are enforced by roughly 7,500 human moderators, according to the company. In Facebook’s case, moderators act (or decide not to act) on content that is surfaced by artificial intelligence or by users who report posts they believe violate the rules. Artificial intelligence is very good at identifying porn, spam, and fake accounts, but it’s still not great at identifying hate speech.

The policy team is supported by others at Facebook’s sprawling, city-sized campus in Menlo Park, California. (Wild foxes have been spotted roaming the gardens on the roofs of some of Facebook’s new buildings, although we unfortunately did not see any during our visit.) One team responds specifically to content moderation crises. Another builds moderation software tools and artificial intelligence. Another tries to ensure accuracy and consistency across the globe. Finally, there’s a team that makes sure all of the other teams are working together properly.

"If you say, ‘Why doesn’t it say use your judgment?’ We A/B tested that."

How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours.
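A rough back-of-envelope calculation (ours, not Facebook’s, taking the figures above at face value and assuming moderators only just hit the one percent target) suggests what those numbers mean in practice:

\[
10{,}000{,}000 \times 0.01 = 100{,}000 \ \text{per week}, \qquad \frac{100{,}000}{7} \approx 14{,}300 \ \text{per day}
\]

And because the review volume is “more than” 10 million posts a week, and the one percent figure is a target rather than a measured rate, the true daily total is plausibly higher.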

Even by its own targets, Facebook is still making tens of thousands of moderation errors per day. And while Facebook moderators apply the company's rules correctly the vast majority of the time, users, politicians, and governments rarely agree on the rules in the first place. The issue, experts say, is that Facebook's audience is now so large and so diverse that it's nearly impossible to govern every possible interaction on the site.

“This is the difference between having 100 million people and a few billion people on your platform,” Kate Klonick, an assistant professor at St. John's University Law School and the author of an extensive legal review of social media content moderation practices, told Motherboard. “If you moderate posts 40 million times a day, the chance of one of those wrong decisions blowing up in your face is so much higher.”

A common area at Facebook headquarters where Zuckerberg sometimes holds Q&A sessions. Image: Jason Koebler

Every social media company has had content moderation issues, but Facebook and YouTube are the only platforms that operate at such a large scale, and academics who study moderation say that Facebook’s realization that a failure to properly moderate content could harm its business came earlier than it did at other platforms.

Danielle Citron, author of Hate Crimes in Cyberspace and a professor at the University of Maryland’s law school, told Motherboard that Facebook’s “oh shit” moment came in 2013, after a group called Women, Action, and the Media successfully pressured advertisers to stop working with Facebook because it allowed rape jokes and memes on its site.

“Since then, it’s become incredibly intricate behind the scenes,” she said. “The one saving grace in all of this is that they have thoughtful people working on a really hard problem.”

Facebook's solution to this problem is immensely important for the future of global free expression, and yet the policymaking process, the technical and human infrastructure, and the individual content decisions behind it are largely invisible to users. In June, the special rapporteur to the Office of the United Nations High Commissioner for Human Rights issued a report calling for “radical transparency” in how social media companies make and enforce their rules. Many users would like the same.

Motherboard has spent the last several months examining all aspects of Facebook’s content moderation apparatus—from how the policies are created to how they are enforced and refined. We’ve spoken to the current and past architects of these policies, combed through hundreds of pages of leaked content moderation training documents and internal emails, spoken to experienced Facebook moderators, and visited Facebook’s Silicon Valley headquarters to interview more than half a dozen high-level employees and get an inside look at how Facebook makes and enforces the rules of a platform that is, to many people, the internet itself.

Facebook’s constant churn of content moderation problems comes in many different flavors: There are failures of policy, failures of messaging, and failures to predict the darkest impulses of human nature. Compromises are made to accommodate Facebook’s business model. There are technological shortcomings, there are honest mistakes that are endlessly magnified and never forgotten, and there are also bad-faith attacks by sensationalist politicians and partisan media.

While the left and the right disagree about what should be allowed on Facebook, lots of people believe that Facebook isn’t doing a good job. Conservatives like Texas Senator Ted Cruz accuse Facebook of bias—he is still using the fact that Facebook mistakenly deleted a Chick-fil-A appreciation page in 2012 as exhibit A in his argument—while liberals were horrified at Zuckerberg’s defense of allowing Holocaust denial on the platform and the company’s slow action to ban InfoWars.

Zuckerberg has been repeatedly grilled by the media and Congress on the subject of content moderation, fake news, and disinformation campaigns, and this constant attention has led Zuckerberg and Facebook’s COO Sheryl Sandberg to become increasingly involved. Both executives have weighed in on what action should be taken on particular pieces of content, Neil Potts, Facebook’s head of strategic response, told Motherboard. Facebook declined to give any specific examples of content it weighed in on, but The New York Times reported that Zuckerberg was involved in the recent decision to ban InfoWars.

Sheryl Sandberg. Image: Shutterstock

Sandberg has asked to receive weekly briefings on the platform’s top content moderation concerns. This year, she introduced a weekly meeting in which team leads come together to discuss the best ways to deal with content-related escalations. Participants in that meeting also decide what to flag to Zuckerberg himself. Earlier this year, Sandberg also formed a team to respond to content moderation issues in real time and get them in front of the CEO.

Zuckerberg and Sandberg “engage in a very real way to make sure that we're landing it right and then for, you know, even tougher decisions, we bring those to Sheryl,” Potts said.

Facebook told Motherboard that last year Sandberg urged the company to expand its hate speech policies, which spurred a series of working groups that took place over six to eight months.

Sandberg has overseen the teams in charge of content moderation for years, according to former Facebook employees who worked on the content policy team. Both Zuckerberg and Sandberg declined to be interviewed for this article.

“Sheryl was the court’s highest authority for all the time I was there,” an early Facebook employee who worked on content moderation told Motherboard. “She’d check in occasionally, and she was empowered to steer this part of the ship. The goal was to not go to her because she has shit to do.”

CALMING THE CONSTANT CRISIS

Facebook’s Community Standards, which it released to the public in April, are largely the result of responding to many thousands of crises over the course of a decade, and then codifying those decisions as rules that can apply to similar situations in the future.

The hardest and most time-sensitive types of content—hate speech that falls in the grey areas of Facebook’s established policies, opportunists who pop up in the wake of mass shootings, or content the media is asking about—are “escalated” to a team called Risk and Response, which works with the policy and communications teams to make tough calls.

In recent months, for example, many women have begun sharing stories of sexual abuse, harassment, and assault on Facebook that technically violate Facebook’s rules, but that the policy team believes are important for people to be able to share. And so Risk and Response might make a call to leave up or remove one specific post, and then Facebook’s policy team will try to write new rules or tweak existing ones to make enforcement consistent across the board.

“With the MeToo Movement, you have a lot of people that are in a certain way outing other people or other men,” James Mitchell, the leader of the Risk and Response team, told Motherboard. “How do we square that with the bullying policies that we have to ensure that we're going to take the right actions, and what is the right action?”

A chalkboard inside Facebook headquarters that talks about building community. Image: Jason Koebler

These sorts of decisions and conversations have happened every day for a decade. Content is likely to be “escalated” when it pertains to a breaking news event—say, in the aftermath of a terrorist attack—or when it’s asked about by a journalist, a government, or a politician. Though Facebook acknowledges that the media does find rule-breaking content that its own systems haven’t, it doesn’t think that journalists have become de facto Facebook moderators. Facebook said many reporters have begun to ask for comment about specific pieces of content without sharing links until after their stories have been published. The company called this an unfortunate development and said it doesn’t want to shift the burden of moderation to others, but added that it believes it’s beneficial to everyone if it removes violating content as quickly as possible.

Posts that fall in grey areas are also escalated by moderators themselves, who move things up the chain until it eventually hits the Risk and Response team.

"There was basically this Word document with ‘Hitler is bad and so is not wearing pants.’”

These decisions are made in back-and-forths on email threads, or in huddles at Facebook headquarters and its offices around the world. When there isn’t a specific, codified policy to fall back on, this team will sometimes make “spirit of the policy” decisions that fit the contours of other decisions the company has made. Occasionally, Facebook will launch so-called “lockdowns,” in which all meetings are scrapped to focus on one urgent content moderation issue, sometimes for weeks at a time.

“Lockdown may be, ‘Over the next two weeks, all your meetings are gonna be on this, and if you have other meetings you can figure those out in the wee hours of the morning,’” Potts, who coordinates crisis response between Mitchell’s team and other parts of the company, said. “It’s one thing that I think helps us get to some of these bigger issues before they become full-blown crises.”

Potts said, for example, that a wave of suicide and self-harm videos posted soon after Facebook Live launched triggered a three-month lockdown between April and June 2017. During this time, Facebook created tools that allow moderators to see user comments on live video, added advanced playback speed and replay functionality, added a timestamp to user-reported content, added text transcripts to live video, and added a “heat map” of user reactions that shows the points in a video where viewers are engaging with it.

“We saw just a rash of self-harm, self-injury videos go online,” he said. “And we really recognized that we didn't have a responsive process in place that could handle those, the volume. Now we've built some automated tools to help us better accomplish that.”

Facebook's first campus, viewed from the roof of its new campus. Image: Jason Koebler

As you might imagine, ad hoc policy creation isn’t particularly sustainable. And so Facebook has tried to become more proactive with its twice-a-month “Content Standards Forum” meetings, where new policies and changes to existing policies are developed, workshopped, and ultimately adopted.

“People assume they’ve always had some kind of plan versus, a lot of how these rules developed are from putting out fires and a rapid PR response to terrible PR disasters as they happened,” Klonick said. “There hasn’t been a moment where they’ve had a chance to be philosophical about it, and the rules really reflect that.”

These meetings are an attempt to ease the constant state of crisis by discussing specific, ongoing problems that have been surfacing on the platform and making policy decisions about them. In June, Motherboard sat in on one of these meetings, which is notable considering Facebook has generally not been very forthcoming about how it decides the rules its moderators are asked to enforce.

Motherboard agreed to keep the specific content decisions made in the meeting we attended off the record because they are not yet official policy, but the contours of how the meeting works can be reported. Teams from 11 offices around the world tune in via video chat or crowd into the room. In the meetings, one specific “working group” made up of some of Facebook’s policy experts will present a “heads up,” which is a proposed policy change. These specific rules are workshopped over the course of weeks or months, and Facebook usually consults outside groups—nonprofits, academics, and advocacy groups—before deciding on a final policy, but Facebook’s employees are the ones who write it.

“I can't think of any situation in which an outside group has actually dealt with the writing of any policy,” Peter Stern, who manages Facebook’s relationships with these groups, said. “That's our work.”

In one meeting that Motherboard did not attend but that was described to us by Monika Bickert, Facebook’s head of global policy management and the person who started these meetings, an employee discussed, for example, how to handle non-explicit eating disorder admissions posted by users. A “heads up” may originate from one of Facebook’s own teams, or it can come from an external partner.

For all the flak that Facebook gets about its content moderation, the people who are actually working on policy every day very clearly stress about getting the specifics right, and the company has had some successes. For example, its revenge porn policy and recently created software tool—which asks people to preemptively upload nude photos of