
Social Media Companies Are Not Free Speech Platforms

"The First Amendment doesn’t apply to people who run the internet.”

The aftermath of the US presidential election has been as troubled as the campaign. Hundreds of hate-driven incidents were reported after Donald Trump's victory, the Southern Poverty Law Center attests. And police say some anti-Trump protests turned into riots.

Hate crimes will be documented by the FBI, as they were last year. But it's harder to measure the proliferation of hateful speech, which is protected by the US Constitution and left largely unchecked, especially online. That puts private social media companies in a sticky place, left to choose between upholding the unfettered right to free speech, however ugly it may be, and shielding their users from abuse and threats by regulating what they can say. And it leaves us to decide how we want to use platforms that curb this freedom.


"Hate speech can't be a crime because it is protected by the First Amendment," said Wayne Giampietro, a First Amendment lawyer in Chicago. "But the First Amendment doesn't apply to people who run the internet."

While public hate speech can't, and I think shouldn't, be suppressed, social platforms like Twitter, Facebook, and Reddit are free to set their own restrictions to moderate the communities they want to foster. Using racial slurs or sexist language is protected in public life, but private companies can decide what kind of dialogue they will entertain.

"The First Amendment doesn't apply to people who run the internet."

Twitter's new guidelines, released last week, are the company's latest response to constant reports of abuse and cyberbullying. The platform lets users report hateful conversations, block racial slurs, and opt out of whole conversations in an attempt to put distance between abusers and their victims. Think of it as a mechanism for controlling who steps into your home, versus the inevitability of encountering strangers on the street, something that matters to those of us who have dealt with online abuse on multiple occasions.

"They're becoming more responsive," said Sameer Hinduja, a cyberbullying expert and professor of criminology at Florida Atlantic University who has worked informally with companies like Twitter and Facebook. "It's useful to enlist the help of the community."


In the past few weeks, Twitter has taken some of its boldest moves to date. It suspended or banned several members of the so-called "alt-right" movement, including prominent spokesperson Richard Spencer, as VICE reported. That means those who subscribe to the "alt-right" ideology can continue to organize around the "future of people of European descent" in the country, but they will no longer have one of the world's biggest platforms to do so.

Facebook, meanwhile, has been caught in a maelstrom of attention during the election for its arbitrary policing of speech. On the one hand, it changed its strategy after employees inside the organization spoke out this summer and users learned it could be suppressing conservative-leaning news. But Facebook has also come under fire for overcompensating for its mistakes and allowing fake news to populate users' feeds, and it said last week it would stop letting its ads fund sites that don't do fact-based reporting.

"They've got to walk a narrow line," Giampietro said. "To what extent are we going to suppress certain ideas? And to what extent will you stop something that defames somebody [through fake news]."


Other platforms like Reddit are less likely to intervene, regardless of their users' vitriol, as game developer Brianna Wu pointed out. This could explain why white nationalist groups have chosen to communicate primarily through these forums. However, even Reddit, long the paragon of a free-speech social platform, has introduced guidelines and restrictions for its users, including bans on language that threatens, harasses, or bullies other users.


While this sounds like a debate that lives on the internet, it has plenty of real-world impact.

The US, unlike its neighbor Canada, protects all speech, including hate speech, as long as it doesn't incite violence. But that line is increasingly blurry now that communities grow and organize online. Reddit threads teeming with anti-Semitic, anti-LGBT, or anti-black comments, for example, are the same ones calling for real-life demonstrations and action.

"We can't just say it's okay because it doesn't incite violence. We focus on youths for that reason—they become traumatized," Hinduja said. At the extreme end, he said, this can lead to suicide and violence, but at the very least it impacts healthy discussion.

He said more measures will be put into place, whether through machine learning or other filters. And if social platforms fail to address abuse, they will slowly fall off the radar, Hinduja said, much like JuicyCampus, a once-popular site that pitted college students against each other in a whirlwind of gossip and anonymous judgment calls.

"The antidote for hate speech is more of the right kind of speech."

But restricting speech online, even to protect users from abuse, could also cost companies users, or drive those users to siloed platforms. Some "alt-right" members are now urging people to join unrestricted platforms like Gab.ai instead of Twitter, so no one gets in the way of their conversations. And Reddit's regulations have pushed the same crowd to forums like Stormfront, which claims to support these "racial realists."


Giampietro said the only actual way to combat this kind of hate speech is to drown it in reasonable rhetoric. "We'll never be able to—and we shouldn't—prevent anyone from speaking," he said. "The antidote for hate speech is more of the right kind of speech. And denouncing those people, and demonstrating to the world what idiots they are for thinking that way."

This might sound idealistic, but it's not impossible. Earlier this month, for example, Beverly Whaling, a mayor in West Virginia, commended a tweet that called First Lady Michelle Obama "an ape in heels," earning her intense backlash on social and news media. Just a couple of days later, the mayor resigned under pressure, saying she regretted the "hurt it may have caused."

In such a case, it was a combination of Twitter's online community, media attention and cross-platform sharing that put pressure on the mayor to resign—cementing the connection between hate speech online and offline.

It's important to note that online hate speech, and hate crime, did not begin with Donald Trump and his supporters, nor will it end with his presidency. Fighting for civil rights is an ongoing process, and whatever choices Twitter and Facebook make, people can still carry the same hateful conversations, and the antidotes to them, into real life.

But Americans are still largely unaware of the algorithms and parameters they're speaking within as they share every political and social view they have on social media. And this election has given us the opportunity to figure out how we can become a more informed public, free of both censorship and abuse.
