Terror Scanning Database For Social Media Raises More Questions than Answers

Facebook, Twitter, Microsoft, and YouTube are joining forces to mark and take down 'content that promotes terrorism.' But who defines what that is?

On Monday, Facebook, Microsoft, Twitter, and YouTube announced a new partnership to create a "shared industry database" that identifies "content that promotes terrorism." Each company will use the database to find "violent terrorist imagery or terrorist recruitment videos or images" on its own platform, and remove the content according to its own policies.

The exact technology involved isn't new. The newly announced partnership is likely modeled after what companies already do with child pornography. But the application of this technology to "terrorist content" raises many questions. Who is going to decide whether something promotes terrorism or not? Is a technology that fights child porn appropriate for this particular problem? And most troubling of all—is there even a problem to be solved? Four tech companies may have just signed on to developing a more robust censorship and surveillance system based on a narrative of online radicalization that isn't well supported by empirical evidence.

How the Tech Industry Built a System for Detecting Child Porn

Many companies—for example, Verizon, which runs an online backup service for customers' files—use a database maintained by the National Center for Missing and Exploited Children (NCMEC) to find child pornography. If they find a match, they notify the NCMEC CyberTipline, which passes that information on to law enforcement.

The database doesn't contain the images themselves, but rather hashes—digital fingerprints that identify a file. This means that service providers can scan their servers without "looking" at anyone's files. Thanks to PhotoDNA, a technology donated by Microsoft, the hashes are derived from the visual characteristics of the photos and videos themselves, meaning that cropping or resizing a file won't necessarily change its hash value.
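PhotoDNA's actual algorithm is proprietary, so the general idea is easiest to see with a much simpler scheme. The toy "difference hash" below is a minimal sketch in Python, assuming the Pillow imaging library; it only illustrates the concept of a fingerprint derived from an image's visual content that survives resizing, and the function names and distance threshold are invented for the example, not drawn from PhotoDNA.

```python
# A minimal sketch of perceptual hashing, assuming Python 3 and Pillow.
# This toy "difference hash" is NOT PhotoDNA; it only illustrates the idea.
from PIL import Image

def dhash(image_path, hash_size=8):
    # Shrink to a tiny grayscale grid; discarding scale and color detail
    # is what makes the fingerprint survive resizing and recompression.
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    # Record whether each pixel is brighter than its right-hand neighbor,
    # packing the comparisons into a single integer fingerprint.
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def is_known(content_hash, known_hashes, max_distance=4):
    # Hamming distance lets near-duplicates (lightly edited or
    # recompressed copies) still match entries in the database.
    return any(bin(content_hash ^ h).count("1") <= max_distance
               for h in known_hashes)
```

A provider can then scan uploads by computing each file's fingerprint and checking it against the shared set, so no human ever looks at files that don't match.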

Monday's announcement marks the first time companies have sought to use this kind of technology to combat "terrorist content" online. It's an odd match. The hash matching system suited the fight against child pornography well. For one thing, it allowed companies to scan for files without learning anything about non-matching files—so, arguably, without violating anyone's privacy, except with respect to possession of child porn. It also spared people from having to look at child porn in order to identify it—the very act of viewing child porn so it can be removed from the internet can be traumatic for the employees who police content on platforms.

Applying the Hash System to Terrorism

Neither of these specific upsides to the hash identification system seems to apply to "terrorist content," since the partnership appears to be aimed at publicly posted social media. (I asked Facebook via email whether the hash identification system would be applied to private messages between users, but did not hear back from the company.) Furthermore, the companies have stated in their press release that a person on the other end will look at the content before taking it down. The press release implies, but does not explicitly say, that matching hits will not be provided to government officials, the way hits for child pornography are.

Indeed, one of the useful things about the child pornography hash identification database is that it's managed by a single entity with specific knowledge of and experience with the content at issue—that is, NCMEC. The industry database for "terrorist content" that's being proposed will be composed of hashes added by each platform as it removes content under its own policies.

Facebook removes "content that expresses support" for groups involved in terrorism or organized crime. Even "supporting or praising leaders of those same organisations, or condoning their violent activities" is banned from the platform. So, for example, a video that praises ISIS might get taken down by Facebook. The Facebook employee or contractor taking it down might choose to hash the video and share the hash through the database. The database would then flag the same video when it's uploaded to YouTube.

YouTube's terrorism policy, however, is different from Facebook's—it prohibits terrorist recruitment videos and other content intended to incite violence. Theoretically, the same video that gets banned from Facebook might pass muster on YouTube. And just because a video is in the database doesn't mean it'll automatically get taken down.
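Put concretely, the workflow the companies describe, where one platform contributes a hash and another gets a flag and then applies its own rules, might look something like the sketch below. This is only a guess at the shape of the system, assuming exact-match hashes; the companies haven't published their implementation, and every name here is hypothetical, with `violates_local_policy` standing in for each platform's human review.

```python
# A hypothetical sketch of the shared industry database described in the
# press release. None of these names come from the actual announcement.

class SharedHashDatabase:
    def __init__(self):
        self.hashes = set()  # fingerprints contributed by any partner

    def contribute(self, content_hash):
        # Called when a platform removes content under its own policy
        # and chooses to share the hash with the other companies.
        self.hashes.add(content_hash)

    def matches(self, content_hash):
        return content_hash in self.hashes

def handle_upload(db, content_hash, violates_local_policy):
    # A database hit only flags the upload for review; each platform
    # still applies its own rules, so the same video can stay up on
    # one service and come down on another.
    if not db.matches(content_hash):
        return "published"
    return "removed" if violates_local_policy(content_hash) else "published"
```

Under this model, a video Facebook hashed for praising a terrorist group would surface for review at YouTube, but would only come down there if it also violated YouTube's narrower recruitment-and-incitement policy.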

But Facebook, Twitter, and YouTube have all been vigorously criticized for their seemingly haphazard and inconsistent application of their existing policies, whether with respect to nudity, harassment, or copyright infringement. Identifying "terrorist content" implicates just as many tricky questions of judgment.

A video of a guy in a ski mask beheading an American journalist seems like a pretty straightforward case, but the world is a lot more complicated than ISIS and not-ISIS. Hamas is not only a major political party that has won parliamentary elections in Palestine; it's also been designated as a terrorist organization by many countries, including the United States. The Kurdistan Workers' Party (sometimes called the PKK, short for Partiya Karkerên Kurdistanê) is also designated as a terrorist organization by the United States, even as the US has provided air support for its ground forces fighting ISIS.

"To me, you've built a hammer and now you're asking the world to look for nails."

If the State Department can't be consistent on who's a terrorist and who's not a terrorist, is it really a good idea to entrust that decision to tech companies that are strangely baffled by human nipples?

These worries aren't just speculative. Facebook has, in the past, been criticized for its apparent censorship of Palestinian journalists.

Experts worry that the inflammatory way society and the media deal with terrorism, combined with a corporate-driven takedown process, will result in illegitimate censorship. "To me, you've built a hammer and now you're asking the world to look for nails," said Andy Sellars, director of the Boston University / MIT Technology and Cyberlaw Clinic. "This is a system that encourages over-reporting."

Although the press release emphasizes that the database only flags content, and removal will not be automatic, Sellars said that he didn't see a world where tech companies would hesitate to remove content flagged as "terrorism-promoting."

And flagging content based on hashes focuses on the content itself, not the context or the message it sends. Sellars pointed out that child pornography "is really the only place where media is contraband by its very definition." An ISIS recruitment video, on the other hand, shifts in meaning and effect when it's shared by journalists, or by social scientists studying extremism.

This point is echoed by others. Hugh Handeyside, a staff attorney at the American Civil Liberties Union (ACLU), said, "This kind of digital hash system has been used to identify child pornography, but child pornography and so-called terrorism content are not really comparable. The first is always illegal, the second may be news."

And in a January 2016 interview for On the Media, John Horgan, a professor of psychology at Georgia State University who specializes in studying terrorist behavior, said: "Pedophilia and terrorism are quite different. I mean, there is no clear sense of what involvement in terrorism is. Terrorism can involve anything from browsing radical sites to donating money to questionable sites, to far more extreme activity of wanting to go overseas to become a foreign fighter or constructing bombs." (I emailed Horgan for further discussion, but he did not respond in time for this article's publication.)

Does "Content Promoting Terrorism" Lead to Actual Terrorism?

Technology built to address child porn might be a bad fit for combating "terrorist content," but even more troublingly, there isn't consensus on whether terrorists are converted by extremist content on the internet. In fact, there's a lot of evidence that says they aren't.

Handeyside at the ACLU said that "decisions about what constitute terroristic content are often based on theories about radicalization and violence that research and studies have debunked."

Horgan is similarly critical of the narrative of online radicalization. He said that there is no single profile of a terrorist, or of what makes someone into one. In fact, it seems that "radicalization" might not even be part of the process.

"There is increasing evidence that suggests that people who become involved in terrorism don't necessarily hold radical views. At least not to begin with," Horgan said in January. "In many cases we see people's radical views developing as a result of spending time in a terrorist group. … There are lots of examples of individuals I've interviewed who have said, 'Well, I didn't realize why I became involved in this movement until … I wound up in prison.'"

ISIS is certainly very active on social media, and its presence on platforms created by US companies may be alarming to people who otherwise feel distant from the ongoing conflict abroad. But ISIS's social media presence doesn't necessarily translate to real-world effects.

Horgan said that even ISIS acknowledges that for many of its supporters, involvement both begins and ends online. "One of our researchers, Charlie Winter, discovered several senior female jihadis based in Syria complaining to foreign supporters here in the United States, saying to them, 'How dare you spend all of your time just engaging in social media. You need to get up off your backsides and come out here and join the fight.'"

Potential Law Enforcement Exploitation of the Database

Monday's press release implies that the partnered companies are not going to give the government unfettered access to scan their platforms using the hash database.

"No personally identifiable information will be shared … And each company will continue to apply its practice of transparency and review for any government requests," the press release reads. But it stops short of explicitly stating that it won't, for example, comply with a Foreign Intelligence Surveillance Act (FISA) order similar to the one that got Yahoo to build a scanner that searched the emails of all customers.

Indeed, in that particular case, an anonymous official told The New York Times that the government was scanning for a "digital signature" of a "communications method used by a state-sponsored, foreign terrorist organization" in the emails of all Yahoo Mail users. It's not known whether the FISA order was for a hash value. (In fact, other sources have told Motherboard that the description given to the Times was wrong, and that the Yahoo scanning tool was more similar to a rootkit.)

Facebook and Google have been criticized in the past for allowing the National Security Agency access to their users' data under the PRISM program, as exposed in the Snowden documents. Ever since the backlash around those revelations, many tech companies have become much more wary about their cooperation with the United States government—a development sometimes referred to as "the Snowden effect."

But even if you can now count on companies to resist forking over information without a legally binding government order, that doesn't mean the government can't get a legally binding order that gives it access to the newly constructed terror-image database and its accompanying scanning capabilities.

Sellars and other lawyers familiar with electronic surveillance law say that a warrant to, for example, scan all of Facebook for a particular image would lack "particularity"—a requirement under the Fourth Amendment, which bars unreasonable searches and seizures. A FISA order, however, might be a different story. The Electronic Frontier Foundation has argued, and continues to argue, that such orders are unconstitutional, and there is some debate as to whether FISA or any other law could legitimately authorize the order at issue in the Yahoo mail case. But the Yahoo mail case, along with other ongoing cases, sets troubling precedents for what the courts have allowed to pass muster.

"The ability to transfer this idea from the particular context of identification on online platforms, to a tool of surveillance, is perhaps the scariest thing," said Sellars. Whether and to what extent the government can hijack the technology that Silicon Valley is building for its own use remains to be seen. It's not clear that the law allows law enforcement or intelligence agencies to be able to scan entire social media platforms for particular images or videos. The only thing we know is that it's not just technically possible, but that four companies have partnered to build out the infrastructure to do it.
