Here's Twitter's Response to the Racist Harassment of Leslie Jones
Once again, it’s taken a celebrity to show just how bad Twitter's anti-harassment tools are for everyone.
Image: YouTube/Sony Pictures Entertainment
Leslie Jones, one of the stars of the new Ghostbusters movie, spent most of Monday night fending off a tirade from anonymous Twitter eggs who flooded her mentions with racist and sometimes violent tweets. This morning, after several hours of reporting and blocking users—the only available means of "preventing" harassment on Twitter—the actress gave up and announced she would be leaving the site.
Twitter's anti-harassment mechanisms are still bad, and once again, it's taken the targeting of a celebrity to magnify just how bad they are for everyone. And though we've been here before, the simple question still remains: Why doesn't Twitter do anything about it?
There's certainly no shortage of suggestions from experts that Twitter could tap into. In 2015, the Electronic Frontier Foundation published some thoughts and measures the company could adopt to better protect its users. Included were new policies and tools, but also guidance on how to define abuse (for example, "trolls" are not of the same ilk as abusers), how not to police content, and how to expand transparency. Another option would be to hire a staff of full-time moderators whose sole job would be finding and fielding harassment.
One proposal that's popped up time and time again is the ability to block all replies to a single tweet, or even block replies from users you don't follow. Twitter is a public platform, but must openness and harassment go hand in hand?
Currently, when a user like Jones—or you or I, for that matter—wants to stop someone from tweeting truly repugnant shit at them, there are two options at hand: block them, or report them.
The first strategy prevents a user from being able to see your tweets or follow you when they're logged in. However, according to Twitter's own rules, blocking "only works if the account you've blocked is logged in on Twitter. For example, if the account you've blocked isn't logged in or is accessing Twitter content via a third party, they may be able to see your public tweets." Blocking also doesn't necessarily stop someone from creating a new account to contact you, or from showing up in your mentions.
Twitter's reporting feature, which allows users to flag behavior that violates the company's guidelines, isn't much better. For starters, it can take a significant amount of time to submit a complaint. In some cases, Twitter asks that you provide examples of abusive content, which in theory makes sense, but in practice can be difficult when you're being dogpiled by hundreds of persistent users, as Jones was. And once you actually file the complaint, reviewers may or may not approve your request. A third-party audit conducted by Women, Action, & the Media in 2015 found that when people reported allegedly abusive users, Twitter took action 55 percent of the time, and very rarely deleted offending accounts.
I reached out to Twitter and asked whether the company had plans to implement new tools for combating harassment. A spokesperson offered the following statement (the same one that was first provided to BuzzFeed News last night):
"This type of abusive behavior is not permitted on Twitter, and we've taken action on many of the accounts reported to us by both Leslie and others. We rely on people to report this type of behavior to us but we are continuing to invest heavily in improving our tools and enforcement systems to prevent this kind of abuse. We realize we still have a lot of work in front of us before Twitter is where it should be on how we handle these issues."
Twitter co-founder and CEO Jack Dorsey requested that Jones DM him last night, so I also asked whether the two had discussed her experience with harassment on the platform. Representatives did not reply, nor did they address my question about how and when the company plans to "invest heavily in improving" its tools.
Jones is no stranger to the type of hatred that's allowed to filter through Twitter's porous anti-harassment policies. Opponents of the all-female Ghostbusters reboot have been virulently voicing their distaste with the film's new direction since it was first announced in 2015, claiming its more diverse (albeit still mostly white) casting "ruined their childhood," or was part of a nefarious SJW agenda. Most of these remarks were centralized on YouTube or Reddit; however, the movie's actors were also confronted by their critics on Twitter. Earlier this year, a subset of people who accused the film's writers of stereotyping Jones' character (a black, "street savvy" New York transportation employee) nearly pushed the actress to abandon her account after Twitter users began to personally attack her online.
Former Twitter CEO Dick Costolo once admitted the platform sucks at "dealing with abuse and trolls," and that it has sucked at it for years. None of this is new. But hey, Twitter just launched a new application process for fancy blue checkmarks, so if all else fails, maybe try to get verified. Seriously. Will checkmarks for everyone solve the problem of harassment? Probably not, but it's better than the status quo.
Update: Tuesday, July 19:
According to a statement provided to BuzzFeed News and Recode, Twitter has permanently suspended Breitbart editor Milo Yiannopoulos for organizing and inciting targeted abuse online. The decision to revoke his access was reportedly in response to the harassment of actress Leslie Jones, which Yiannopoulos actively participated in and helped to instigate.
This isn't the first time Twitter has taken disciplinary action against the alt-right icon. Earlier this year, Yiannopoulos lost his verification for allegedly violating Twitter's abusive behavior policy—a move that Breitbart likened to "declaring war on conservative media." Yiannopoulos' account, which had nearly 400,000 followers as of today, has been suspended before, but only temporarily and for brief periods.
In its most recent statement, Twitter admits that "many people believe we have not done enough to curb this type of behavior." A spokesperson repeated the company's assurance that it is investing heavily in better anti-harassment tools, but it remains unclear what exactly that means, or when any changes might arrive.
It's worth noting that Twitter's response to the recent attacks has resulted in the banning of only a single user, as far as we can tell. And the company's decision to chop the head off the snake, so to speak, could instigate an avalanche of residual harassment toward other individuals. What is clear, however, is that Twitter has experimented with warnings and permanent suspensions in the past, and yet here we are. Has anything really changed?
Below is Twitter's statement regarding Yiannopoulos:
"People should be able to express diverse opinions and beliefs on Twitter. But no one deserves to be subjected to targeted abuse online, and our rules prohibit inciting or engaging in the targeted abuse or harassment of others. Over the past 48 hours in particular, we've seen an uptick in the number of accounts violating these policies and have taken enforcement actions against these accounts, ranging from warnings that also require the deletion of Tweets violating our policies to permanent suspension.
We know many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it's happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We'll provide more details on those changes in the coming weeks."