
Facebook Exposed Moderators to the Very Suspected Terrorist Groups They Were Monitoring

One moderator fled the country and went into hiding.

Mere hours after Facebook announced it is asking itself the "hard questions," the Guardian newspaper has revealed that a major bug on the social media site exposed the identities of Facebook's counter-terrorism moderators to the very people they were working to take down.

The bug, which was discovered in November 2016, allegedly exposed more than 1,000 workers across 22 Facebook departments.

The personal Facebook accounts of the administrators tasked with removing hate speech and terrorist propaganda appeared as notifications in the activity logs of the members of Facebook groups that the moderators were removing. Members of the groups could then view personal details about the moderators.


The incident raises the question: why were the moderators signed in with their personal Facebook accounts in the first place?

The Guardian learned that around 40 of the 1,000 or so affected employees—who were contracted by a global outsourcing company called Cpl Recruitment—worked in Facebook's counter-terror unit based out of the company's European headquarters in Dublin, Ireland.

The newspaper spoke to one of the six moderators among those 40 who were classed as "high priority" victims after their personal profiles were viewed by suspected Egyptian Islamic State sympathisers. The moderator was forced to quit his job and move to Eastern Europe for five months out of fear of being hunted down and murdered; his story is harrowing.

But the contracted moderators could, at least theoretically, have been using specialised Facebook Workplace accounts by November 2016, rather than their personal accounts. Workplace by Facebook was publicly introduced in October 2016. According to a former Facebook employee who spoke to Motherboard anonymously, Facebook employees used their personal accounts at work until Workplace arrived. At this point, employees started to switch over to Workplace.

Motherboard asked Cpl Recruitment and Facebook whether the policy extends to contractors, but did not receive an immediate reply.

The moderator who spoke to the Guardian said that upon joining Facebook, he was given two weeks' training and was required to use his personal Facebook account to log into Facebook's moderation system.


Still, the flaw affected Facebook moderators dating back to August 2016, before Facebook had publicly rolled out Workplace accounts.

Facebook told Motherboard that it is testing new administrative accounts that will not require the use of personal Facebook accounts.

But it's hard to defend a protocol that, until recently, did not make the safety of Facebook's moderators the highest priority. These moderators, many dealing with some of the most dangerous terrorist organisations in the world, should not have been asked to sign in to their work platform using personal Facebook accounts.

Facebook told Motherboard that it has now modified its infrastructure to make it much harder for a Facebook worker's information to become available externally.

"In addition to communicating with the affected people and the full teams that work in these parts of the company, we have continued to share details with them about a series of technical and process improvements we've made to our internal tools to better detect and prevent these types of issues from occurring," Facebook told Motherboard.

Joseph Cox contributed reporting.