


86 Organizations Demand Zuckerberg Improve Takedown Appeals

Facebook is facing criticism over its content takedown and account deactivation appeals process, which was introduced alongside the company’s expanding content moderation practices.
Text that reads "An Open Letter to Mark Zuckerberg."
Image: Screenshot from SantaClaraPrinciples.org

Over the past year, Facebook has been forced to expand its content moderation practices in response to human and civil rights groups, who have argued that the company is not doing enough to protect its marginalized users from hate speech. Now, Facebook faces criticism that it enforces those content moderation policies inconsistently, in ways that put marginalized users at a disadvantage.


An open letter to Mark Zuckerberg signed by 86 organizations and published on Tuesday implores Facebook to provide a clear, fast mechanism that allows users to appeal content takedowns and account deactivations. The letter, which was spearheaded by the Electronic Frontier Foundation, Article 19, Ranking Digital Rights, and the Center for Democracy and Technology (CDT), expanded upon the Santa Clara Principles published earlier this year, which called on all social media platforms to improve their transparency and responsiveness to flagged posts and appeals for removed content.

In April of this year, Facebook launched appeals for posts that are removed on grounds of nudity, hate speech, or graphic violence. The press release claims that one of Facebook's human content reviewers will review all appeals within 24 hours and notify users whether their appeal has been approved or denied.

The Facebook appeals process was largely an attempt to balance the company’s quickly expanding content moderation operation. For perspective, Facebook reportedly had about 4,500 content moderators in May of 2017, a figure that grew to 7,500 human content reviewers by April of 2018.

There are multiple high-profile examples of Facebook content takedowns and account deactivations that have been described as erroneous. For instance, rapper Lil B was banned from Facebook for 30 days for making comments about white people that Facebook claimed violated its hate speech policies, which the company described as "race neutral." Black organizers throughout the country have said that their accounts are frequently deactivated on Facebook, either temporarily or permanently, and that their content is often flagged and removed for violating the company's community standards.


In a phone call with Motherboard, Jillian York, EFF's Director for International Freedom of Expression, said she believes the undercurrent of content moderation on social media is the censorship or restriction of marginalized groups' speech.

“There are accounts, [and] there is content that is taken down frequently from social media, and we don’t hear those stories as much because they’re often overshadowed by the pushes for hate speech to come down,” York said. “I respect the people doing that work, I think it’s really important. But really, the thing about appeals is they work in every case. So if someone breaks the rules for hate speech and they appeal, they’re not gonna get their account restored. But if someone who should not have had their account taken down in the first place, appeals are the right solution to that.”

The open letter to Mark Zuckerberg also requests that all content takedown and deactivation appeals be reviewed by a human moderator, which Facebook claims it already does. But of course, there is a human cost to human moderation. A former Facebook content moderator filed a class action lawsuit in September alleging that the job gave her PTSD.

In May of this year, Facebook published its first Community Standards Enforcement report. It claims that the company acted on 3.4 million pieces of flagged content in the first quarter of 2018, compared to 1.2 million pieces of content in the last quarter of 2017. The report also claims that just 14 percent of the content that Facebook “took action on” was flagged by users, meaning 86 percent was flagged by Facebook's human and AI content moderators. There is currently no public data regarding Facebook’s appeals process.


In an email to Motherboard, a Facebook spokesperson said that they take user appeals seriously. “These are very important issues,” the email reads. “It’s why we launched an appeals process on Facebook in April, and also published our first transparency report on our effectiveness in removing bad content. We are one of the few companies to do this – and we look forward to doing more in the future.”

According to a New York Times article published Wednesday, Donald Trump’s 2016 presidential election campaign prompted Facebook to reckon with its hate speech policies. The company ultimately decided that Trump’s campaign statements, although racist, had a newsworthiness that outweighed the grounds for removing them as hate speech.

But hate speech was not a novel issue for Facebook in 2016. Documents leaked to Motherboard reveal that the company has distinguished between expressions of white nationalism (which Facebook did not accept) and white separatism (which Facebook did accept); the company has announced that it is reviewing these policies. In countries such as Myanmar, a lack of content moderation at scale in the country’s native language let hate speech spread largely unabated, fueling an ethnic genocide and the spread of misinformation by Myanmar’s own government officials.

While the company has taken steps to respond to these criticisms, it faces structural difficulties in its approach to content moderation. As Motherboard investigated earlier this year, the company’s approach is to have a set of rules and procedures for every possible manifestation of human communication on its platform. But new slurs, memes, and other forms of communication are created every day across all human languages. For this reason, it has been enormously difficult for the company to adapt to the quirks of human communication while pursuing its mission of connecting all users.

Yet the organizations that signed the open letter argue that Facebook’s appeals process, to date, has been insufficient. The CDT said in an email to Motherboard that content moderation at scale is a difficult job, and that a flawed appeals process is an unfortunate side effect of an effort to get hate speech and platform abuse under control.

“Erroneous and unfair removals affect journalists, advocates, artists, activists, and many more—but we typically only hear about the highest profile cases,” a CDT spokesperson said. “As platforms have ramped up their content takedown efforts in response to a variety of issues and pressures, we worry that they have not expanded their focus on appeals at the same rate.”

Considering that the appeals process was only initiated in April of this year, it’s possible that these flaws represent growing pains alongside strengthened content moderation. York of the EFF told Motherboard that the crafters of the letter are in communication with Facebook and discussing how their recommendations can become a reality.