Journalists Are Not Social Media Platforms’ Unpaid Content Moderators
Image: Drew Angerer / Getty Images

During a Senate Intelligence Committee hearing on Wednesday, Twitter CEO Jack Dorsey admitted how much the platform relies on reports from journalists to counter offending content on the site.

For years, tech companies have been getting free content moderation from journalists. Whether it’s Russian manipulation campaigns, non-consensual sexual imagery, or hate speech around the world, journalists have often been the ones unearthing illegal or problematic behavior on huge platforms, with social networks only dealing with issues once they know a news article is coming.

In a recent interview at Facebook headquarters, one senior company employee told Motherboard that Facebook has somewhat formalized the process of responding to content moderation-related inquiries from journalists, and has a dedicated system for ‘escalating’ issues highlighted by journalists. This often gets the content in front of the people who decide whether to remove it more quickly than ordinary user reports do.

“One of the main escalations channels that we have is a press escalations channel,” James Mitchell, the leader of Facebook’s Risk and Response team, said.

“Generally, if our press team is working with a journalist externally, all of those inquiries that have to do with content or ads etcetera are going to come through our escalations channels, because we know, hey, well, this is [a] sort of time-sensitive, important, high sensitivity issue that we need to deal with,” he added.

Last month, when explaining why Twitter hadn’t initially banned InfoWars from its platform, CEO Jack Dorsey said “it's critical journalists document, validate and refute [dis]information directly so people can form their own opinions. This is what serves the public conversation best.” On Wednesday, at a Senate Intelligence Committee hearing, Dorsey again noted the role that journalists play in counteracting disinformation that spreads and is incentivized on his platform.

“We have this amazing constituency of journalists globally using our service every single day, and they often with a high degree of velocity call out unfactual information,” Dorsey said. “We don’t do the best job of giving them tools and context to do that work, and we think there’s a lot of improvements we can make to amplify their content and their messaging so people can see what is happening with that context.”

But journalists are not content moderators. It is not reporters’ job to work in service of cleaning up the platforms of some of the most powerful companies on the planet. Journalists are not employed by Twitter or Facebook or Google, but in reporting content that violates these platforms’ rules to them, or in actively pushing back against disinformation, they are fundamentally doing valuable work for those companies: “We do benefit” from journalists fighting disinformation, Dorsey said.

That isn’t to say that journalists should not identify and report on social media platforms’ failings, missteps, or hypocrisy, or identify when a platform is allowing misinformation to spread rampantly. Social networks operate at such a gargantuan scale and have such omnipresence in people’s daily lives that they need to be held to account like other, more traditional institutions.

A journalist’s job should be to provide information and reporting in the public interest, and to serve readers. For journalists who focus on technology, this often means reporting on things that happen on platforms, or on the internal machinations of the companies themselves. Sometimes this will include debunking disinformation. But there is a difference between occasionally explaining why something is wrong and being used by social media platforms as a core part of their strategies to stamp out behavior that violates their policies, or as a tool to prevent the proliferation of disinformation. In many cases, that disinformation only requires debunking in the first place because of the incentive structures, reach, and scale of the platforms; often it is the platforms’ own engagement-driven algorithms that amplify the content into the public consciousness.

“The fundamental reason for content moderation—its root reason for existing—goes quite simply to the issue of brand protection and liability mitigation for the platform,” Sarah T. Roberts, an assistant professor at UCLA who studies commercial content moderation, told Motherboard last month. “It is ultimately and fundamentally in the service of the platforms themselves.”

To be clear, every platform has moderators or algorithms that catch the vast majority of content violations. But still, many of the highest-profile content moderation decisions seem to be made only after there’s publicity around them. For example, InfoWars was only banned from platforms after a steady drumbeat of reporting on the disinformation and hate speech spread by Alex Jones.

Relying on journalists to flag individual pieces of content, or broader trends, should not become the norm. And journalists can at least take a step back from this blurring of responsibilities.

The dynamic between journalists investigating social networks and the tech giants behind the platforms often plays out the same way: a journalist finds some offending content; the journalist approaches the company for comment; the company asks for examples so it can figure out how to respond; the company then deletes the offending content, and declines to comment or provides some sort of blanket, vague statement. Often, companies will remove the content and not respond to the journalist at all.

In these cases, the tech giant has gotten some free content moderation, while perhaps not providing the journalist much, or any, context for their piece. This doesn’t happen in every case, and some companies are better than others, but, in our experience, the pattern is well established.

At Motherboard, we’ve increasingly decided to withhold specific examples from tech companies when approaching them for an article. Of course, if they really need an example in order to have enough context to comment, such as a single specific video, it is probably best to provide a link to the clip. And we will give them enough context about what the videos are and what the issue is so they can write a statement explaining why they believe the content should or shouldn’t be removed. But for more general issues, or especially public material that can be found with a simple search, the companies don’t need hand-holding to find the videos: indeed, the point is that they could have, and perhaps should have, come across them easily in the first place.

For example, when Motherboard wrote code to monitor how long trivial-to-find neo-Nazi propaganda remained on YouTube, we declined to provide the company with any links to particular examples.

“The idea of this piece was to find how long neo-Nazi videos were staying on YouTube without us interfering (that is, reporting them through YouTube's mechanisms or otherwise flagging them to the company); sending them over will undermine that,” Motherboard wrote to a YouTube spokesperson at the time, after they immediately requested links so they could “review” them.
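That kind of monitoring doesn’t require anything elaborate. The sketch below is not Motherboard’s actual code, just a minimal illustration of the approach: it assumes you have a YouTube Data API key and a hand-collected list of video IDs (both placeholders here), and it simply polls the public videos endpoint until each video stops appearing in the results.

```python
import time
import requests

API_KEY = "YOUR_API_KEY"                  # placeholder: a YouTube Data API key
VIDEO_IDS = ["VIDEO_ID_1", "VIDEO_ID_2"]  # placeholder: video IDs collected by hand

API_URL = "https://www.googleapis.com/youtube/v3/videos"

def still_live(video_id):
    """Return True if the YouTube Data API still lists the video."""
    resp = requests.get(API_URL, params={"part": "id", "id": video_id, "key": API_KEY})
    resp.raise_for_status()
    # Removed, private, or terminated videos simply drop out of the "items" list.
    return len(resp.json().get("items", [])) > 0

first_seen = {vid: time.time() for vid in VIDEO_IDS}
removed_at = {}

# Poll until every tracked video has disappeared, logging how long each stayed up.
while len(removed_at) < len(VIDEO_IDS):
    for vid in VIDEO_IDS:
        if vid not in removed_at and not still_live(vid):
            removed_at[vid] = time.time()
            hours_up = (removed_at[vid] - first_seen[vid]) / 3600
            print(f"{vid} disappeared after {hours_up:.1f} hours of monitoring")
    time.sleep(3600)  # check once an hour
```

The point of a script like this is that it only observes; it never reports or flags the videos, so the clock it keeps reflects how long the platform’s own systems take to act.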

The spokesperson provided some points on background (meaning you can’t quote the person, but can paraphrase what they say), pointed to a previously published YouTube blog post, and didn’t give a statement. Once the piece went live, YouTube swiftly removed most of the videos without needing specific links. (We provided one clip after publication, since the point of the piece, seeing how long the videos would stay on the platform, had already been made.)

While we were reporting our in-depth feature on how Facebook is trying to moderate its base of 2 billion users, the company said it doesn’t think journalists have become de facto moderators. But an ever-growing list of examples across a slew of social media companies demonstrates otherwise.

YouTube and bestiality, Reddit and deepfakes, Facebook and high-profile alt-right leaders: those platforms only deleted the offending content once journalists reported it. Last week, Facebook announced it had removed a number of accounts belonging to Myanmar military officials. The move came after a Reuters investigation reported on Facebook’s lack of action on moderating hate speech in Myanmar, content that has arguably contributed to violent tensions in the country.

Facebook also told Motherboard that many other reporters have stopped sharing links to specific pieces of content until after their stories are published. Facebook believes it is beneficial to everyone if the company is able to remove violating content as quickly as possible.

But, again, journalists are not here to clean up after tech platforms. It is not their job to make Facebook or YouTube a better platform for users.

Got a tip? You can contact Joseph Cox securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.