
Leaked Documents Show Facebook’s Post-Charlottesville Reckoning with American Nazis

After a white supremacist killed a protester in Charlottesville in 2017, Facebook pushed to re-educate its moderators about hate groups in the US, and to spell out the distinction between white supremacy and white nationalism and separatism, documents obtained by Motherboard show

This piece is part of an ongoing Motherboard series on Facebook's content moderation strategies. You can read the rest of the coverage here.

“James Fields did nothing wrong,” the post on Facebook read, referring to the man who drove a car through a crowd protesting against white supremacy in Charlottesville in August 2017, killing one person. The post accompanied an article from Squawker.org, a conservative website. In training materials given to its army of moderators, Facebook says the post is an example of content “praising hate crime,” and that it and others like it should be removed.

But after Charlottesville, Facebook had something of an internal reckoning around hate speech, and pushed to re-educate its moderators about American white supremacists in particular, according to a cache of Facebook documents obtained by Motherboard.

The documents provide more specific insights into how Facebook views and classifies white supremacy and neo-Nazis, and how those views have evolved, all as American hate speech establishes itself as a huge problem on the social network and other platforms.

“Recent incidents in the United States (i.e. Charlottesville) have shown that there is potentially confusion about our hate org policies and the specific hate orgs in specific markets,” a training document for moderators created shortly after the protest, and obtained by Motherboard, reads.

Got a tip? You can contact this reporter securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

One of the training documents includes a log of when Facebook has modified the material, including adding new examples of hate speech as the network identifies them. In November 2017, trainers added a comparison of Mexican people to worm-like creatures to the document as an example of hate speech; in December they added an offensive comparison between Muslims and pigs; and in February Facebook trainers updated the material to mention users calling transgender people "it."

In January, five months after Charlottesville, Facebook added slides discussing the company’s position on white nationalism, supremacy, and separatism. While Facebook does not allow praise, support, or representation of white supremacy, it does allow the same sort of positions for white nationalism and separatism, according to one of the slides obtained by Motherboard.

Explaining its reasoning, another section of the document says nationalism is an “extreme right movement and ideology, but it doesn't seem to be always associated with racism (at least not explicitly).” Facebook then acknowledges that “In fact, some white nationalists carefully avoid the term supremacy because it has negative connotations.”

Caption: A section of a training slide describing Facebook's policies towards white supremacy, nationalism, and separatism. Image: Motherboard

But despite spelling out these distinctions, Facebook admits in the training materials that the difference between them is not always clear-cut.

"Overlaps with white nationalism/separatism, even orgs and individuals define themselves inconsistently," one slide says, in a section marked "challenges" for white supremacy.

Another slide asks, “Can you say you’re a racist on Facebook?”

“No,” is the response. “By definition, as a racist, you hate on at least one of our characteristics that are protected.”

Facebook classifies hate groups, individuals, and high-profile figures based on “strong, medium, and weak signals,” according to one of the documents focused on hate speech in America. A strong signal would be if the individual is a founder or prominent member of a hate organization (or “h8 org,” in Facebook parlance); a medium signal would include using the name or symbol of a banned hate group, or using dehumanizing language against certain groups of people. Facebook sees partnership or some form of alliance with a banned hate organization, such as participating in rallies together (of particular relevance to events like Charlottesville), as a weak signal, along with an individual receiving a guilty verdict for distributing forbidden propaganda material.

Facebook confirmed to Motherboard in a statement that it evaluates "whether an individual or group should be designated as a hate figure or organisation based on a number of different signals, such as whether they carried out or have called for violence against people based on race, religion or other protected categories."

Caption: A section of a Facebook training manual describing the company's various "signals" for hate groups. Image: Motherboard

In its policy clarification document on hate groups in America, Facebook specifically points to the Ku Klux Klan (KKK), United Klans of America, Aryan Nations, and several other groups that are either based in or popular in the US. Another document, dated April of this year, includes many other white supremacist organizations from around the world, including Atomwaffen Division, a neo-Nazi group linked to several murders in the US. Another document explicitly says that not every organization flagged as a hate group by the Anti-Defamation League (ADL) is designated as one by Facebook. (In its statement, Facebook said "Online extremism can only be tackled with strong partnerships which is why we continue to work closely with academics and organisations, including the Anti-Defamation League, to further develop and refine this process.")

Facebook has increasingly confronted hate speech head-on in recent months, sometimes with mixed results. In December, Facebook admitted to ProPublica that the social network had made mistakes on nearly half of a sample of potentially offensive posts. This month, Facebook accidentally launched, earlier than planned, a new feature that would let users flag content as potentially containing hate speech.

In April, Facebook released a selection of rules for when it takes down content, including hate speech. VP of Global Policy Management Monika Bickert told reporters that “There’s been a lot of research about how when institutions put their policies out there, people change their behavior, and that’s a good thing.” But the rules Facebook published in April were only a sketch of its moderation policies; the material obtained by Motherboard is more granular.

"Our policies against organised hate groups and individuals are longstanding and explicit—we don't allow these groups to maintain a presence on Facebook because we don't want to be a platform for hate. Using a combination of technology and people we work aggressively to root out extremist content and hate organisations from our platform," Facebook added in its statement.

Update: The piece has been updated to include a statement from Facebook.