Where Did the Concept of 'Shadow Banning' Come From?

It's been called shadow banning, toading, being "sent to Coventry," and ghost banning—but where did it actually start?

On Thursday, Donald Trump tweeted: “Twitter ‘SHADOW BANNING’ prominent Republicans. Not good. We will look into this discriminatory and illegal practice at once! Many complaints.”

The tweet, seemingly in response to a VICE News article posted last week, uses the phrase “shadow banning,” a term that has been co-opted by conspiracy theorists who say that Silicon Valley is discriminating against them.

Since the early days of the web, a “shadow ban” has been a moderation technique used to ban people from forums or message boards without alerting them that they’ve been banned. Typically, this means that a user can continue posting as normal, but their posts will be hidden from the rest of the community.
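
In other words, the ban lives entirely on the read path: the site stores the banned user’s posts as usual and simply filters them out for every viewer except their author. A minimal sketch of that logic in Python, with invented names rather than any real forum’s code:

```python
# Minimal sketch of classic forum shadow banning: posts from a
# shadow-banned user are stored normally but shown only to their author.
# Names and data here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

shadow_banned = {"troll42"}  # usernames flagged by moderators

def visible_posts(posts, viewer):
    """Return the thread exactly as a given viewer would see it."""
    return [
        p for p in posts
        if p.author not in shadow_banned or p.author == viewer
    ]

thread = [Post("alice", "hello"), Post("troll42", "flame bait")]
print([p.body for p in visible_posts(thread, "alice")])    # ['hello']
print([p.body for p in visible_posts(thread, "troll42")])  # ['hello', 'flame bait']
```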


In this case, Twitter changed how it ranks accounts in its search results, demoting those it considers to be "bad-faith actors who intend to manipulate or divide the conversation," the company said.

Twitter also had a bug that caused some accounts not to be auto-suggested in search results, even when people searched for them by name. That meant users couldn’t find Pizzagate conspiracy theorist Mike Cernovich’s account in their search suggestions. Twitter says that hundreds of thousands of accounts were affected by the issue, and that it had nothing to do with political affiliation. The bug didn’t prevent affected users’ tweets from being seen or interacted with, but some conservatives still called it a Twitter “shadow ban.”

Twitter says the bug was just that—a bug, which has since been fixed.

But “shadow banning” continues to be a digital bogeyman, used to shoo away the nuances of social media platforms’ increasingly complicated (and often opaque) content moderation strategies. The term has been used by Ted Cruz and Project Veritas’s James O’Keefe to suggest an anti-conservative bias in Silicon Valley, and it has recently joined conspiracy theorists’ vocabularies alongside “crisis actor” and “the deep state.” And while it refers to a real and commonly implemented moderation technique used on countless internet forums over the years, using the term now often says something about a person’s political beliefs. Shadow banning sounds nefarious, but the term has a much less controversial origin, one I spent the last few days tracking down.


Toading and Twit Bits

Before there were “shadow bans,” there was “toading.”

Early internet anthropologist and Motherboard contributor Claire Evans told me that a practice similar to shadow banning—not exactly a ban, but more of a muting—was happening on multi-user domains (MUDs), text-based, Dungeons & Dragons-like chat worlds that date back to the late 70s. MUDs had “toading,” which was “the act of metaphorically turning someone into a 'toad' as a punitive measure,” Evans told me in an email. Toading someone removed “the flag by which the system recognizes the player-object as a player,” according to Yib’s Guide to MOOing. This effectively made the player invisible to the system and to other players.

“Another version of toading was kind of the opposite, moving the player to a public space in the game in order to humiliate them,” Evans, the author of Broad Band, said. “Sunlight is the best antiseptic, etc.”


Other shadow ban-like moderation tools evolved in the 1980s and 1990s. According to The Telecommunications Illustrated Dictionary, many bulletin board system (BBS) access control lists in the 80s (like mod privileges, but applied to every user on the server) included per-user flags that admins could toggle individually, granting access to features like chat, email, or downloads on the BBS. One of these flags was an early version of a shadow ban:


The ‘twit bit’ is a flag that basically labels a user as a “loser,” in BBS parlance. In other words, the twit is someone who has been flagged for limited access because s/he exhibits immature behavior and can’t be trusted with access to any of the powerful features.
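
One way to picture those flags, as a rough sketch rather than any actual BBS software (the flag names here are invented): model each user’s access as toggleable bits, with the twit bit masking off the powerful features.

```python
# Rough model of BBS-style per-user access flags; names are invented
# for illustration, and real systems varied by software and sysop.
from enum import Flag, auto

class Access(Flag):
    CHAT = auto()
    EMAIL = auto()
    DOWNLOADS = auto()
    TWIT = auto()  # the "twit bit": flagged for limited access

def effective_access(flags: Access) -> Access:
    # A "twitted" user silently loses the powerful features but keeps
    # basic participation, so nothing looks obviously different to them.
    if Access.TWIT in flags:
        return flags & ~(Access.EMAIL | Access.DOWNLOADS)
    return flags

user = Access.CHAT | Access.EMAIL | Access.DOWNLOADS | Access.TWIT
print(effective_access(user))  # CHAT (and the TWIT marker) remain; EMAIL and DOWNLOADS are masked off
```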

In this very thorough Metafilter thread, users agree that the earliest form of shadow banning came from Citadel-derived BBS servers. User ricochet biscuit wrote: “It was easy then to create a forum on a topic that would enrage a known troll and just set it so the only users who could see it were the troll and myself. I would post a single item and then watch him try to rabble-rouse and wonder aloud why no one was taking the bait.”

Someone claiming to be a former systems administrator of a Citadel BBS said on the forum Simple Machines that they remember a feature called “coventry,” named after the British idiom “sent to Coventry,” which means to ostracize someone. Admins could set a coventry “bit” on a user and their messages, and if a user did happen to add something worthwhile to the conversation, individual messages could be taken out of coventry, so to speak.

“Most of the time, such users would get increasingly worse for a short period of time, then never come back again,” user labradors wrote. “In some cases, though, they realised that nobody would respond to their rants—only their helpful messages, and they would quiet down to become good, productive members of the board.”
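
As a sketch of how a two-level design like that might work (names invented, not actual Citadel code): a per-user bit hides everything by default, while a per-message exemption lets individual posts through.

```python
# Hypothetical sketch of a Citadel-style "coventry" bit: a per-user flag
# hides a user's messages, but an admin can exempt individual messages.
from dataclasses import dataclass

@dataclass
class Message:
    author: str
    body: str
    exempt: bool = False  # this one message was taken out of coventry

in_coventry = {"ranter"}  # users whose coventry bit is set

def readable(messages, viewer):
    return [
        m for m in messages
        if m.author == viewer           # authors always see their own posts
        or m.author not in in_coventry  # author isn't in coventry
        or m.exempt                     # or this message was exempted
    ]

thread = [Message("ranter", "screed"), Message("ranter", "useful tip", exempt=True)]
print([m.body for m in readable(thread, "alice")])  # ['useful tip']
```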


Kill Files, *plonk*, and Usenet

At the same time BBS users were sending each other to Coventry, people on Usenet newsgroups were managing their feeds with personalized filters. Philip Greenspun, an early internet entrepreneur and a pioneer of online communities, alluded to a similar feature on 80s Usenet servers. Back then, “it was possible to establish a ‘bozo filter’ to screen out messages from posters with a track record of being uninteresting,” he wrote on his blog.


I asked Seth Morabito—a Unix programmer who worked at one of the biggest Silicon Valley startups in the 90s and has spent a lot of time in then-obscure online communities—whether he recalled seeing anything like a shadow ban on the forums he frequented. He told me in an email that the closest thing he could think of happened on Usenet newsgroups, where decentralized conversations took place. What he described sounds a lot like the “bozo filter”: if you were in a flame war, or if you were just acting like an asshole, you could be added to another user’s “kill file.”

The kill file let users create a list of usernames and keywords they no longer wanted to see (similar to “muting” on Twitter).
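
The kill file was purely client-side: it changed only what its owner saw, applied by the newsreader as each article came in. A minimal sketch of the idea, with invented names and addresses:

```python
# Minimal sketch of a Usenet-style kill file: a purely client-side
# filter on usernames and keywords, invisible to everyone but its owner.
# Posters, keywords, and articles here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Article:
    poster: str
    subject: str
    body: str

KILLED_POSTERS = {"flamewarrior@example.net"}
KILLED_KEYWORDS = {"MAKE MONEY FAST"}  # matched case-insensitively below

def survives_kill_file(article: Article) -> bool:
    if article.poster in KILLED_POSTERS:
        return False
    text = f"{article.subject} {article.body}".upper()
    return not any(kw in text for kw in KILLED_KEYWORDS)

feed = [
    Article("alice@example.com", "Re: MUD history", "Toading predates this."),
    Article("flamewarrior@example.net", "You're all idiots", "*plonk* yourself"),
]
print([a.subject for a in feed if survives_kill_file(a)])  # ['Re: MUD history']
```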

“People being people, folks very rarely put anyone into their kill file without letting the target KNOW that they were being put into a kill file,” Morabito said. It became tradition to reply to the victim with a single word, *plonk*, surrounded by asterisks.


“It was more or less public shaming, letting both the target and the rest of the newsgroup know that your buttons had been pushed so hard that you were just going to outright ignore the troublemaker forever more,” he said.

The “Shadow Ban”

Like so much other modern internet lingo, the “shadow ban” seems to have come from the humor website and forums Something Awful.

Rich Kyanka, the creator of Something Awful, says that the site that spawned “weird Twitter” coined ubiquitous internet terminology including “banhammer,” “spoiler alert,” “let’s play,” and “AMA” (Ask Me Anything). He also says that Something Awful mods invented the term “shadow ban” in 2001.

“We would use it as a joke and only do it to people who were intentionally trying to troll others,” Kyanka, who is regular-banned from Twitter these days, told me in an email. “None of the moderators used it very often, and we got rid of it within a year, replacing it with simply banning the person.”

The banned user could easily tell if they were shadowbanned by logging out of their account and looking for their own posts.
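
That test is easy to automate: request the thread with no session at all and check whether your own post is in the response. A sketch of the idea, with a placeholder URL and post marker rather than any real endpoint:

```python
# Hypothetical version of the logged-out test: fetch a thread as an
# anonymous visitor and see whether your own post shows up. The URL and
# post marker are placeholders, not a real forum's endpoint or markup.
import urllib.request

THREAD_URL = "https://forum.example.com/thread/12345"  # placeholder
MY_POST_MARKER = 'id="post-67890"'                     # placeholder

def appears_when_logged_out() -> bool:
    # No cookies or auth headers: this is what a stranger would see.
    with urllib.request.urlopen(THREAD_URL) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return MY_POST_MARKER in html

if __name__ == "__main__":
    print("visible" if appears_when_logged_out() else "possibly shadow banned")
```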

Kyanka is banned from Twitter for making jokes that the platform deemed against its community guidelines. Some of those jokes include telling far-right personality Baked Alaska that he “should go to a room and the room should fill full of concrete,” which Twitter deemed a death threat, and telling Nancy Pelosi to "eat the children,” which the platform considered hate speech.


The term shadow banning has persisted on internet forums and subreddits to this day. Message boards still have features like this, including vBulletin’s “global ignore,” which adds a user to every other user’s ignore list. There’s a subreddit dedicated to figuring out whether you’ve been shadow banned, where a bot will give you the verdict. But even though platforms like Twitter claim they don’t shadow ban users, there are still people who blame this real feature of other internet communities when their voices aren’t being heard.

The Birth of a Conspiracy Theory

The idea that Silicon Valley giants use shadow banning to silence conservatives was popularized by far-right publications over the last two years. In 2016, Breitbart published a story claiming that Twitter shadow banning is “real and happening every day,” citing a source inside the company. In January, Project Veritas published a selectively edited video of a former Twitter software engineer explaining the concept of shadow banning and how Twitter supposedly used it. Sen. Ted Cruz repeated the claim from Project Veritas during a Senate hearing with Twitter’s director of public policy, Carlos Monje, who denied that Twitter shadow bans users.

In a blog post published Thursday, Twitter said unequivocally: “We do not shadow ban.”

In the early days of the web, online communities were small and fractured, and each came up with its own ways to police itself, which is why we have all these different, interesting terms like toading and kill files. Today, platforms like Twitter have to moderate millions of people—billions, in Facebook’s case. At that scale, content moderation is a significantly different job. Moderators are rarely members of the communities they’re moderating; instead, they’re paid to act on content that users report for violating a platform’s rules, or that algorithms surface.



The types of actions that can be taken against content are also far more sophisticated. Because very few social media sites use reverse-chronological timelines anymore, every piece of content is “moderated” automatically by a platform’s ranking algorithm in some way, making it more or less likely to show up in users’ feeds.

Though Facebook and Twitter can and do delete content and ban users for violating their rules, they also have more nuanced ways of moderating content they believe to be problematic. In the case of Holocaust denial, for example, Facebook will sometimes “significantly reduce the distribution of content” rather than ban it outright.
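
In ranking terms, “reducing distribution” usually means demoting a score rather than deleting a row: the post stays up, but it sorts far lower in feeds. A toy sketch with invented labels and weights, not any platform’s actual ranker:

```python
# Toy sketch of algorithmic demotion: instead of deleting a post, the
# ranker multiplies its score down so it surfaces far less often.
# All labels and weights here are invented for illustration.
DEMOTION = {"borderline": 0.1, "spam-like": 0.01}  # hypothetical labels

def ranking_score(engagement: float, labels: list) -> float:
    score = engagement
    for label in labels:
        score *= DEMOTION.get(label, 1.0)  # unlabeled content is untouched
    return score

feed = [
    ("normal post", ranking_score(120.0, [])),
    ("flagged post", ranking_score(500.0, ["borderline"])),
]
feed.sort(key=lambda item: item[1], reverse=True)
print(feed)  # the flagged post ranks below the normal one despite more raw engagement
```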

Part of the current confusion about shadow banning is happening because conspiracy theorists are trying to prove an anti-conservative bias at some of the world’s most powerful companies, and “shadow banning” sounds scary. But not all of this confusion is in bad faith: social media companies have traditionally been terrible at explaining how they moderate, and their timeline algorithms and rule-making processes are black boxes that have never been sufficiently explained to the public.

In fact, last month, the special rapporteur to the United Nations Human Rights Council issued a 20-page report noting that social media companies “must embark on radically different approaches to transparency at all stages of their operations, from rule-making to implementation and development of ‘case law’ framing the interpretation of private rules.”

It would be a lot cleaner and easier to blame a platform that has purposefully decided to censor you when people aren’t engaging with your social media posts, but the fact is that algorithms make a lot of the decisions about what people see and don’t see. Content moderation simply isn’t that straightforward these days.

“I’d say one of the key advantages the old-school Internet has over today’s platforms is that you were much more likely to just get booted from a community if you didn’t follow the rules,” Evans said.