Twitter Suspensions Reveal the Company's Skewed Views on 'Extremism'

Twitter chooses to focus on the faraway spectre of ISIS rather than the neo-Nazis closer to home.

Every society engages in censorship. Whether from church or state or otherwise, the desire to suppress information seems a natural human impulse, albeit one that varies in its manifestations. Most of us readily accept certain kinds of censorship—think child sexual abuse imagery—but are reluctant to call it by its name.

The restriction of content we deem beyond the pale is still, in fact, censorship. The word "censorship" is not itself a value judgement, but a statement of fact, an explanation for why something that used to be, no longer is. The American Civil Liberties Union defines "censorship" as "the suppression of words, images, or ideas that are 'offensive', [that] happens whenever some people succeed in imposing their personal political or moral values on others." The definition further notes that censorship can be carried out by private groups—like social media companies—as well as governments. And when carried out by unaccountable actors (be they authoritarian governments or corporations) through opaque processes, it's important that we question it.

According to Twitter's latest transparency report, the company suspended more than 377,000 accounts for "promoting extremism." Twitter said that 74 percent of extremist accounts were found by "internal, proprietary spam-fighting tools"—in other words, algorithms and filters built to find spam, but employed to combat the spectre of violent extremism.

Few have openly questioned this method, which is certainly not without error. In fact, the filtering of actual spam inspired more of a debate back in the day—in 1996, residents of the town of Scunthorpe, England, were prevented from signing up for AOL accounts due to the profanity contained within their municipality's name, leading to the broader realization that filters intended to catch spam or obscenity can have overreaching effects. The "Scunthorpe problem" has arisen time and time again when companies, acting with good intentions, have filtered legitimate names or content.

The Scunthorpe problem demonstrates that when we filter content—even for legitimate reasons or through democratic decisions—innocuous speech, videos, and images are bound to get caught in the net. After all, you can't spell socialism without "Cialis".
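To see how little it takes to produce that kind of collateral damage, here is a minimal sketch of a naive substring filter, the rough mechanism behind the Scunthorpe problem. The blocklist and function name are invented for illustration; this is not Twitter's or AOL's actual code.

```python
# A hypothetical sketch of a naive substring filter, the kind of logic
# behind the Scunthorpe problem. Blocklist and names are illustrative only.

BLOCKLIST = {"cialis", "viagra"}  # spam terms a crude filter might screen for

def is_blocked(text: str) -> bool:
    """Flag text if any blocked term appears anywhere inside it."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("Buy cheap Cialis now"))  # True  -- the intended catch
print(is_blocked("socialism"))             # True  -- collateral damage: "so-CIALIS-m"
print(is_blocked("social policy"))         # False
# The 1996 AOL incident worked the same way: an obscenity filter matched a
# substring of "Scunthorpe" and blocked residents from registering.
```

The filter does exactly what it was built to do; the problem is that substring matching cannot tell a spam pitch from a town name or a political ideology.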

We know that companies, using low-wage human content moderators and algorithms, undoubtedly make mistakes in their policing of content. To err is human, and algorithms are built and implemented by humans, lest we forget. But when a company takes charge of ridding the world of extremism, with minimal to no input from society at large, there's something more insidious going on.

Twitter's deeming of some content—but not other content—as "extremist" is, after all, a value judgement. Although there's little transparency beyond numbers, much of the banned content matches up neatly with the US government's list of designated terrorist organizations. We don't know what kinds of terms Twitter uses to weed out the accounts, but accounts expressing support for Islamic terror organizations seem to make up the bulk of takedowns. Meanwhile, neo-Nazis like Richard Spencer are rewarded with a "verified" checkmark—intended to signify a confirmed identity, but often used and seen as a marker of celebrity.

By choosing to place its focus on the faraway spectre of ISIS—rather than the neo-Nazis closer to home—Twitter is essentially saying that "extremism" is limited to those scary bearded men abroad, a position not unlike that of the American media. In fact, extremism is part of our new, everyday reality, as elected officials opt for racist and sexist policies and as President Trump eggs on his most ardent white supremacist fans, offering tacit support for their vile views. As white supremacist hate gains ground, companies seem caught unawares, unwilling or unprepared to "tackle" it the way they have Islamic extremism.

The question of whether to censor, of what to censor, is an important one, one that must be answered not by corporations but through democratic and inclusive processes. As a society, we may in fact find that censoring extremism on social platforms helps prevent further recruitment, or saves lives, and we may decide that it's worth the potential cost. At that point, we could work to develop tools and systems that seek to prevent collateral damage, to avoid catching the proverbial dolphins in the tuna nets.

Nevertheless, we must be cautious, for any tool that we do build can—in the "wrong" hands—be used for other purposes. Twitter's repurposing of the spam filter isn't the only example: PhotoDNA, a Microsoft-owned technology built to identify child sexual abuse images, is allegedly also used to identify and censor ordinary adult nudity. Numerous small governments have employed commercial home filtering software to censor their citizens' access to information. When we build systems to separate and segment content, we run the risk of them being used to do harm.
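A rough sketch makes the dual-use point concrete. Consider a generic hash-based blocklist matcher—an illustration only, far simpler than PhotoDNA, which uses perceptual rather than exact hashing: the matching code never changes, so whoever supplies the blocklist decides what gets removed.

```python
# Hypothetical hash-blocklist matcher, loosely analogous to (but much
# simpler than) systems like PhotoDNA. The point: the matching code is
# identical no matter what the list contains -- only the list changes.

import hashlib

def fingerprint(content: bytes) -> str:
    """Reduce a file to a short identifier that can be compared against a list."""
    return hashlib.sha256(content).hexdigest()

def should_remove(content: bytes, blocklist: set[str]) -> bool:
    return fingerprint(content) in blocklist

# Built to block one category of imagery...
abuse_hashes = {fingerprint(b"<known abuse image bytes>")}
# ...but trivially repointed at another, simply by swapping the list.
nudity_hashes = {fingerprint(b"<ordinary adult nudity image bytes>")}

upload = b"<ordinary adult nudity image bytes>"
print(should_remove(upload, abuse_hashes))   # False
print(should_remove(upload, nudity_hashes))  # True
```

The technology is agnostic about its target; the values live entirely in the list it is handed.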

Ultimately, Twitter is a company, not a democracy, and it has the legal right to remove whatever it wants—but that shouldn't stop us from questioning and challenging the company's values and choice to treat two types of dangerous extremists so differently. We should demand more accountability and greater transparency—we may be the product, but that doesn't mean we have to acquiesce when corporations impose bad policies on us.