
Twitter Can't Figure Out Its Censorship Policy

When it comes to hate speech, Twitter is still figuring out its rules.

New York Times editor Jon Weisman announced he was leaving Twitter last week, thanks "to the racists, the anti-Semites, the Bernie Bros who attacked women reporters yesterday." Enough was enough.

Here's what happened: In response to a rash of hatred on the site, Weisman's colleague Ari Isaacman Bevacqua (also a Times editor) reported accounts that used anti-Semitic slurs and threats to Twitter support. Twitter replied that it "could not determine a clear violation of the Twitter Rules," Weisman told me. It didn't make sense to him.


Weisman isn't alone. A Human Rights Watch director, a New York Times reporter, and a journalist who wrote about a video game have all reported a similar phenomenon. Still more confirmed the process independently to Motherboard. They each got what they perceived to be a threat on Twitter, reported the tweet to Twitter support, and received a reply that the conduct does not violate Twitter's rules.

When Twitter introduced new rules of conduct in January, the company gave itself an impossible task: let 310 million monthly users post freely and without mediation, while also banning harassment, "violent threats (direct or indirect)," and "hateful conduct." The fault lines are showing.

An update to reporting this tweet, as many suggested, to @twittersecurity. As usual, no action taken. pic.twitter.com/pJZ3e8Tq85
— Hiroko Tabuchi (@HirokoTabuchi) May 16, 2016

The hands-off response that Bevacqua received fits with the Twitter that CEO Jack Dorsey touts. Censorship does not exist on Twitter, he says.

But there's another side to Twitter, one with a "trust and safety council" of dozens of web activist groups. This side of Twitter developed a product to hunt down abusive users. It's the one that signed an agreement with the European Union last month to monitor hate speech. It's joined by Facebook, YouTube, and Microsoft in the agreement, and while it's not legally binding, it's the first major attempt to put concrete rules in place about how online platforms should respond to hate speech.


"There is a clear distinction between freedom of expression and conduct that incites violence and hate," said Karen White, Twitter's head of public policy for Europe.

What's not entirely clear is how Twitter is going to enact this EU agreement, though it seems the platform will rely on users reporting offensive content.

The internet has always been a breeding ground for vitriol, but it has become far more visible lately. Neo-Nazis have been putting parentheses, or "echoes," around the names of Jewish writers. Google recently removed a Chrome extension called Coincidence Detector that added these marks around writers' names automatically. The symbol represents "Jewish power," because anti-Semites just can't give up on their theory that Jews are behind everything bad in history.

"What's new is for people to be able to deliver their hatred in such public and efficient ways."

From a practical standpoint, policing hate speech on a platform with 310 million monthly users is difficult. The "echoes" don't show up on a Twitter search or on a Google search. Twitter wants to be a place of open and free expression. But it also, at least according to a statement to the Washington Post, wants to "empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance."

"I would say that much of the anti-Semitism that is being spread on Twitter and other platforms is not new in terms of the messaging and content," Oren Segal, director of the Anti-Defamation League's Center on Extremism, told Motherboard. "What's new is for people to be able to deliver their hatred in such public and efficient ways."


The "echoes" symbol has also come to signal racism more broadly. It is common on Twitter even without the extension: some writers have put the marks around their own names to reappropriate the symbol, but others use it for hatred. One user sent Weisman a photo of a trail of dollar bills leading to an oven.

Another user tweeted in reply: "well Mr. (((Weisman))) hop on in!" This user has a red, white, and blue flag with stars, stripes, and a swastika as his cover photo. It's a flag from the Amazon television series The Man in the High Castle, which depicts an America under Nazi control.

@jonathanweisman pic.twitter.com/iVqpMu22pO
— Timmy Norris (@AgentTimothy) May 19, 2016

Weisman reported these tweets to Twitter. The site didn't remove them. Some others, though, were removed. "Suddenly I get all these reports back saying this account has been suspended," Weisman told the Washington Post. "I don't really know what their decision-making is," he said. "I don't know what is considered above the line and what isn't."

"It's not like this echo or this parentheses meme was in and of itself the most creative and viral anti-Semitic tactic that we've seen," Segal said. "It's relevant because we've come to a time of more anti-Semitism online… It represented one element of a larger trend."

Twitter has taken action against accounts perceived as offensive before. Recently it suspended five accounts that parodied the Russian government, although the most popular of these, @DarthPutinKGB, is now back up. Since mid-2015, Twitter has suspended more than 125,000 accounts for promoting terrorism, a practice that picked up in 2013.

After the March terrorist attacks in Belgium, the hashtag #StopIslam was trending. Twitter removed it from the trending topics sidebar, even though many tweets were using the hashtag to criticize it.

Earlier this year, the platform revoked the "verified" status of Breitbart personality Milo Yiannopoulos, who tweets provocative messages that have been described as misogynistic and as harassment. Yiannopoulos said he reached out to Twitter twice but never got an answer about why he was un-verified. The platform was as frustratingly unresponsive to him as it has been to users who report offensive tweets.

To Twitter's co-founder and CEO, bigotry is part of life. "It's disappointing, but it's reflective of the world," he said when Matt Lauer asked him about people who use the platform to "express anger and to hurt people and insult people." He reminded Lauer that users are free to block whomever they'd like, although he's never blocked anyone on his account.