"Because of these principles, we do not actively monitor and will not censor user content, except in limited circumstances described below."
– the old Twitter Rules, 2009 – 2015
The first Twitter Rules were fairly slim: 568 words, divided up under the headings of Impersonation, Privacy, Violence and Threats, Copyright, Unlawful Use, Serial Accounts, Name Squatting, Malware/Phishing, Spam, and Pornography.
As of January 2016, the Rules are—at 1,334 words—over twice as long, and now contain sections barring trademark infringement, hate speech, and harassment.
The evolution of the Rules was gradual, with incremental changes introduced in response to legal threats or big news stories. Twitter Verified Accounts were introduced in response to a 2009 trademark lawsuit over a fake account, and many of the current anti-harassment policies were put in place after mounting media coverage of Gamergate. This year, the Rules came to reflect a ban on hate speech.
The gradual changes in the Twitter Rules reflect a story about Twitter, and shine light on the story that Twitter has tried to tell about itself. The latest changes in the Rules represent a significant rewriting of both the Rules and of the mythology that Twitter projects about itself.
Throughout the years, Twitter has billed itself as an anti-censorship platform, at one point even calling itself the “free speech wing of the free speech party.”
But changes in the Rules over time reflect the pragmatic reality of running a business. Twitter talked some big talk, but it has buckled under both lawsuits and media outrage, tweaking and changing the Rules around speech whenever something threatened its bottom line. For a business, free speech can only be a meaningful value if it doesn’t really cost anything.
With the newest change to its rules, Twitter rewrites itself and its relationship to free speech.
2009: The First Rules (568 words)
Twitter was founded in 2006, but it wasn’t until 2009, when Twitter had hit around 5 million users, that rules were first mentioned on the Twitter blog.
“A significant part of support queries are in regard to policies and rules of engagement on Twitter,” co-founder Biz Stone wrote in a January 2009 blog post. “As part of this new support improvement, we’re previewing a document called Twitter Rules which will provide more clarity around some of the questions people have when it comes to issues of content and usage boundaries.”
The first Rules were relatively brief, and addressed a very limited set of topics. Small wonder, then, that in the next update, the word count would nearly double.
2009 – 2010: Scammers, Spammers, and Satires (+537 words)
The first big change to the Rules came after Tony La Russa, the manager of the St. Louis Cardinals, sued Twitter in mid-2009 over an account that was “impersonating” him. The suit alleged trademark infringement, cybersquatting, and the violation of the right of publicity.
Although the account’s bio read “Parodies are fun for everyone,” Twitter took down the account within a day of being served with the lawsuit.
From the complaint in La Russa v. Twitter
The unexplained takedown of the parody La Russa account in response to the legal threat was the beginning of the Verified Accounts feature—the blue checkmark that Twitter dispenses to celebrities, politicians, corporations, and journalists to identify certain accounts as “real.”
But it was also the beginning of a long and celebrated tradition in which Twitter would build its anti-censorship mythos while grappling with the practical realities of running a for-profit business.
In Stone’s foreword to The F***king Epic Twitter Quest of @MayorEmanuel (2011), the book version of Dan Sinker’s parody account of Rahm Emanuel, the Twitter co-founder tells a story about wrangling over the content policy for Google Blogger with a group of executives and lawyers in 2003.
The executives wanted to get rid of blogs that were impersonating celebrities, even if they were humorous. Biz Stone and Jason Goldman (later the vice president of product at Twitter) resisted.
Finally, a “soft-spoken Google lawyer” named Alex Macgillivray came to their rescue, formulating a rule that would govern impersonations going forward. “Would a reasonable person understand that this isn’t real?” If so, the account would not be removed. (Macgillivray became Twitter’s general counsel in September 2009.)
“On Twitter, our rules are clear,” wrote Stone in The F***king Epic Twitter Quest foreword. “Impersonation is pretending to be another person or entity in order to deceive and may result in permanent account suspension, but parody is encouraged. We even suggest ways users can indicate that an account is not impersonation, such as a bio that distinguishes the account as parody.”
But that clearly wasn’t enough to protect the fake La Russa account in the face of a trademark lawsuit.
Meanwhile, Twitter added a section to its Rules banning trademark infringement on its platform.
If the changes to the Rules are any indication, between 2009 and 2010, a scourge of fakery had descended upon Twitter. Of the 537 words added to the Rules, 353 dealt with “spam and abuse”—including selling user names, selling followers, “following and unfollowing people in a short time period, particularly by automated means,” or sending “large numbers of duplicate @replies.”
As more and more users came to Twitter, so did the spammers, and with them came a litany of rules regulating both the content and frequency of their speech. No one objected then, and no one objects now: spam is annoying, after all, and something must be done. Today, the bulk of the Rules still deal with spam.
Twitter cofounder Jack Dorsey with President Obama during a 2011 Twitter town hall at the White House. Image: Pablo Martinez Monsivais/AP
2011 – 2012: “The Free Speech Wing of the Free Speech Party” (+21 words)
2011 and 2012 were a quiet period for the Twitter Rules. Aside from a few tweaks to the wording, nothing much changed. Meanwhile, this was the golden age of the Twitter mythos. Twitter (successfully) fought a gag order over a subpoena in a grand jury investigation of Wikileaks, and (unsuccessfully) fought a subpoena for the data of Occupy Wall Street protester Malcolm Harris.
The general manager of Twitter UK called the company “the free speech wing of the free speech party.” The New York Times ran a hagiography of Twitter lawyer Alex Macgillivray, tying the company’s identity to the ideal of free speech.
“Twitter has deftly built something of a reputation for protecting free speech, even unpopular speech,” said the Times. “…It has allowed itself to be used by dissidents in the Arab world and the activist hackers who call themselves Anonymous. It has repeatedly faced pressure from governments in countries where it does business.”
The golden age of Twitter’s reputation stretched out into early 2013, as the Snowden leaks came rolling in. At The Verge, Motherboard managing editor Adrianne Jeffries pointed out that while Google and Facebook had been dealt a black eye by the revelation that they had collaborated with the NSA, Twitter had emerged looking clean as a whistle.
“Twitter’s refusal to join PRISM highlighted the fact that the company has a history of being uncooperative, and often antagonistic, when the government asks for user data,” Jeffries wrote then. As in other accounts, much of this could be traced back to Macgillivray. One source told her, “Macgillivray… ‘doesn’t give a shit’ when the government comes knocking with demands and intimidation.”
But in the summer of 2013, everything changed.
2013: The Report Abuse Button is Introduced (+103 words)
From 2011 to the beginning of 2013, the Rules underwent some clipping, pruning, and light copy-editing, eventually dipping to 1,054 words.
But then in July 2013, public indignation exploded as British feminists began receiving a flood of rape threats on Twitter.
It began with something as simple as activist and writer Caroline Criado-Perez demanding that Jane Austen replace Charles Darwin on a new bank note. A torrent of misogynistic abuse came at her on social media. The furor got bigger and bigger until it even sucked in a female member of Parliament, Stella Creasy.
Part of the outrage—spurred on by the British press—was fueled by a difference in cultural values. In the UK, the whole episode would end with actual jail time for three of the tweeters—an outcome that, if not unthinkable, would be highly controversial in the US.
But the incident also revealed how poorly equipped Twitter was to deal with harassment. In response to the media blow-up, the company added a “Report Abuse” button next to the long-standing “Report Spam” button. Before then, targets of harassment had to find a separate web form and fill out a complaint for every instance of abuse, while spam could be flagged with a single click. The disparity seemed like cold indifference on the company’s part, or at the very least unthinking negligence.
And the blatant misogyny in the abuse of Criado-Perez and Creasy left a bad taste in everyone’s mouth. Over the next two years, public awareness of the online harassment of women would grow, and Twitter’s free-speech reputation would slide from admirable to villainous. In the old story, Twitter was the good guy and the government was the bad guy; now, bizarrely, Twitter was a heartless internet overlord whose victims included a member of British Parliament. The script had been completely flipped.
One hundred and three words were added to the Twitter Rules, under the heading “Targeted Abuse.” The words reflected what had been widely reported about the abuse against Criado-Perez and Creasy.
“You may not engage in targeted abuse or harassment,” the rules said. “Some of the factors that we take into account when determining what conduct is considered to be targeted abuse or harassment are: if you are sending messages to a user from multiple accounts; if the sole purpose of your account is to send abusive messages to others; if the reported behavior is one-sided or includes threats.”
Macgillivray announced his departure from Twitter in August 2013, and the company began to approach speech very differently after he left. But it would be naïve to think that one man made that big of a difference. By then, the public discourse about Twitter and “free speech” had already changed. Beyond that, Twitter made its IPO in November, placing the company under new and different kinds of pressures.
2014: The Year of Gamergate (+30 words)
For Twitter, August 2014 started off very poorly. Robin Williams died, and when Zelda Williams publicly expressed her grief on Twitter, trolls tweeted nasty Photoshopped images of her deceased father. Williams very publicly left Twitter, saying, “I'm sorry. I should've risen above. Deleting this from my devices for a good long time, maybe forever. Time will tell. Goodbye.”
In response, Twitter announced changes to its policies, saying, “We have suspended a number of accounts related to this issue for violating our rules and we are in the process of evaluating how we can further improve our policies to better handle tragic situations like this one.”
But the worst was yet to come. Both the Williams and the Criado-Perez incidents had sparked public outrage, but Gamergate would become the controversy that would never end. The constant targeted harassment of Zoë Quinn, Anita Sarkeesian, and many other people in the games industry made headlines and took up television airtime.
Online harassment is hardly a new phenomenon, but mass attention on Gamergate—and on the gendered dimensions of online harassment in general—refused to die, partly because countless media figures became targets. No wonder then that there would eventually be a bidding war over the rights to produce the Gamergate movie, based on a forthcoming book by Quinn.
In the end, not much changed for the Twitter Rules in 2014, but the things that happened that year would spur on unprecedented changes all throughout 2015.
2015: “We have to keep Twitter safe” (+67 words)
Only 67 words were added to the total word count of the Twitter Rules in 2015. Those 67 words were mostly taken up by an expansion of the ban on pornographic profile, header, and background images. The ban now also included “excessively violent media.”
But 2015 saw huge changes in Twitter’s policies around speech. The Rules themselves didn’t change much, but the page linked out to additional resources, sprinkled throughout the rest of the Support pages, that expanded Twitter’s policies in radical ways.
These changes in Twitter policy were accompanied by an op-ed in the Washington Post by Macgillivray’s successor, Twitter general counsel Vijaya Gadde. “Freedom of expression means little as our underlying philosophy if we continue to allow voices to be silenced because they are afraid to speak up,” she wrote. “We need to do a better job combating abuse without chilling or silencing speech.”
In March, in a page separate from (but linked to by) the Rules, Twitter banned revenge porn.
By April, in another page separate from the Rules, the company also prohibited “threatening or promoting terrorism,” as well as “promot[ing] violence against others… on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability.”
Twitter wasn’t using the term “hate speech,” but the company had effectively banned hate speech.
When asked for comment, a Twitter spokesperson contested this characterization, saying the company does not prohibit hate speech. “‘Hateful conduct’ differs from ‘hate speech’ in that the latter focuses on words. It’s the incitement to violence that we're prohibiting. Offensive content and controversial viewpoints are still permitted on Twitter.”
Twitter is correct that its definition of “hateful conduct” does not sweep as broadly as the hate speech prohibited in many European jurisdictions. But given that a viewpoint-based restriction on inciting speech is alien to American law, and that the “hateful conduct” classification looks just like a subset of hate speech, the distinction seems like hair-splitting.
No doubt there were questions around the language in the rest of the April update. By August, the company had reformulated the phrasing to clarify that it definitely included indirect threats, marking a massive departure from the original rules set out in 2009, which had explicitly limited the prohibition on threats to “direct” and “specific” threats.
Indeed, the new ban on indirect threats contradicted the Rules page, which used the “direct, specific threats” phrasing it had inherited from 2009. But other support pages clarified that Twitter prohibited not only hate speech and indirect threats, but also the “incitement” of harassment—speech that wasn’t a threat per se, but was intended to result in threats regardless.
August also saw an expansion of what could get reported on Twitter. Twitter users could now report threats of self-harm, upon which the company would “take a number of steps to assist the reported user,” like providing contact information for mental health resources.
Twitter most prominently enforced its new policies when it permanently suspended the account of controversial blogger Chuck C. Johnson, after he tweeted what appeared to be a threat at Black Lives Matter activist DeRay Mckesson.
2016: The End of the Free Speech Party (+178 words)
On December 29, 2015, Twitter unveiled its new Rules. One hundred and seventy-eight words were added, but nothing in the new Rules was technically new policy: as described above, every major change—the bans on hate speech, promotion of terrorism, incitement of harassment, and revenge porn—had already been rolled out elsewhere, actively enforced for months. But the policies were at last being laid out in the Rules themselves, which had been largely unchanged for years.
The Rules had inherited a basic structure over the years, but this update moved sections around, and created a new, very robust subsection devoted to the rules against harassment. But more importantly, the preamble to the Rules, which had been unchanged since 2009, was completely rewritten.
The old preamble had focused on giving users both rights and responsibilities. It described a generally hands-off policy, even ending with a promise not to “censor user content, except in limited circumstances described below.” The full preamble read as follows:
Our goal is to provide a service that allows you to discover and receive content from sources that interest you as well as to share your content with others. We respect the ownership of the content that users share and each user is responsible for the content he or she provides. Because of these principles, we do not actively monitor and will not censor user content, except in limited circumstances described below.
In contrast, the new preamble reads:
We believe that everyone should have the power to create and share ideas and information instantly, without barriers. In order to protect the experience and safety of people who use Twitter, there are some limitations on the type of content and behavior that we allow. All users must adhere to the policies set forth in the Twitter Rules. Failure to do so may result in the temporary locking and/or permanent suspension of account(s).
The new preamble doesn’t mention censorship, and it doesn’t mention users’ responsibility for their own tweets.
The newest iteration of the Rules isn’t radical for introducing new policies, because it doesn’t. Hate speech (or, as Twitter puts it, “hateful conduct”) has been banned since April of last year, and indirect threats and the incitement of harassment have been banned since August.
The new Rules are radical because they rewrite Twitter’s story of what it is and what it stands for. The old Twitter fetishized anti-censorship; the new Twitter puts user safety first. While Twitter still pays lip service to “the power to create and share ideas and information instantly,” it’s a far cry from the “free speech wing of the free speech party” of 2012.
“Freedom of expression means little as our underlying philosophy if we allow voices to be silenced because they are afraid to speak up,” said a Twitter spokesperson in response to a request for comment, echoing Vijaya Gadde’s 2015 op-ed almost word for word.
The spokesperson went on to say, “Over the last year, we have clarified and tightened our policies to reduce abuse, including prohibiting indirect threats and nonconsensual nude images. Striking the right balance will inevitably create tension, but user safety is critical to our mission at Twitter and our unwavering support for freedom of expression.”
“Striking the right balance” is one way to describe the tightrope that Twitter is walking on. Even while insisting that free expression is an ideological priority, the company has backed away from the full-throated defense of free speech.
In a way, things were easier when Twitter was still the free speech wing of the free speech party. In the golden age of Twitter’s free speech brand, the company was often lauded for doing the “hard” thing when standing up to governments worldwide. In retrospect, that corporate hardheadedness was easier to pull off than what the company is doing now: policing speech in the name of free speech.
This isn’t to say that Twitter has arrived at a contradiction. The new state of affairs is probably an inevitable conclusion, one that reflects the complexities of human interaction under democratic ideals. We’ve long known that speech can censor other speech; it’s a First Amendment problem known as the “heckler’s veto.” But again, Twitter’s new approach is hard to pull off. The Twitter of today strikes an uneasy balance between its old self and the unapologetic, ideologically unburdened censoriousness of Facebook and Instagram. It remains to be seen whether the company has the vision and creativity to live out its new identity.