As Literal Nazis Abuse Twitter, CEO Asks How to Improve the Platform
Jack Dorsey took to his own account to ask for ideas to improve the service, but the main area of improvement is obvious.
The tech world's latest thought exercise comes from none other than Twitter CEO Jack Dorsey, who yesterday tweeted a request for constructive criticism of the service.
Dorsey was apparently inspired by a similar tweet sent by Airbnb co-founder and CEO Brian Chesky earlier this week. And while the Twitter co-founder and top exec has done things like this before, yesterday's request felt especially loaded, considering the company's rough year dealing with harassment outbreaks, and targeted abuse by white supremacist groups and literal Nazis.
People responding to Dorsey's tweet offered a variety of suggestions, but many asked him to improve how Twitter currently handles harassment. The company recently took baby steps in this regard, pushing new features like a quality filter and better muting capabilities. But Twitter has yet to uniformly enforce sweeping action against hate speech or the spread of private information.
Several people offered Dorsey specific ideas for how he could make Twitter a safer, less hostile place—such as locked hashtags that make it easier to ban abusive users, a point system, or removing one's handle entirely from conversation threads—but Dorsey refused to say much more than the equivalent of "we're working on it."
I asked Twitter for more information regarding Dorsey's decision to solicit advice from the public, and a spokesperson for the company said they had no comment beyond the CEO's tweets. The company also declined to comment on why Dorsey was acknowledging some topics but not others.
For the most part, Dorsey seemed excited to address criticisms about Twitter's user experience. Requests for tools like bookmarking, list organization, topic searching, and tweet threading were met with optimistic replies. Dorsey said he's "thinking a lot" about tweet editing.
If you've ever submitted an abuse report on Twitter, you'll know the process is often performative. One of the most visible complaints regarding harassment is how infrequently action is taken when a single, regular user files an abuse report. Sometimes, Twitter's vetting processes can't even identify blatant hate speech.
Earlier this year, BuzzFeed News conducted an informal survey of 2,700 Twitter users, and found that 29 percent of people reported receiving no response from Twitter after submitting an abuse report. Approximately 18 percent were reportedly told the abusive content they had flagged was actually allowed, according to Twitter's own guidelines.
What's perhaps more frustrating than inaction, however, is that users aren't privy to information that explains how their report was handled internally by Twitter. For example, we still have no idea who looks at abuse reports, how many are processed each day, and what criteria staff use to determine whether something qualifies as harassment.
More transparency from the company could reveal why hate speech so often, at least anecdotally, slips past Twitter's anti-harassment protocols. What's the diversity breakdown of Twitter's community management team? How are its members trained to identify abuse? And how does this vary regionally?
Dorsey's behavior after asking for improvements mirrors Twitter's overall approach to dealing with its abuse problem: Yes, it's a "top priority," and yes, they're currently working on it. But it's complicated, and for the time being, we'll have to patiently wait.
Only, we know the company can do something about harassment on its platform—and right now. High-profile users have been known to receive immediate attention if they complain to Dorsey directly. When the actress Leslie Jones was repeatedly attacked on Twitter in June, her experience resulted in the near-immediate suspension of Breitbart editor Milo Yiannopoulos.
Sometimes, individual abuse reports do result in consequences, but only randomly and unreliably. As Motherboard has written about before, Twitter's anti-harassment tools are good, but they're not enough.
Just like Facebook, which confronted its position as a media company this year after a slew of News Feed fumbles and the unfortunate censorship of a historic war photo, Twitter should decide what it really wants to be: a pure free speech platform, or a platform where all communities can safely participate. It can't have both, but it can set rules and draw lines, and decide how to enforce them.
For now, I guess we can get excited about the hint that editable tweets are probably coming.