First Porn, Now Extremism: The UK Looks to Expand Web Censorship

Google has given British security officials special power to flag extremist videos on YouTube, and the government's plans don't stop there.

Image: Shutterstock/Piotr Adamowicz

The UK is broadening its arsenal of online tools to censor pretty much anything it doesn’t like, going beyond the now broadly deployed, controversial, and frequently mocked ISP filters. The latest step isn’t targeted at porn—the usual quarry in the British government’s digital firing line—but extremism. And this time the government isn’t working with internet providers, but with Google.

The Financial Times reports that the tech giant has given UK security officials special permission to flag videos on YouTube. The “super flagger” status allows authorities to select content for the company to screen immediately. They don’t have to pick out videos one-by-one, either, but can flag whole swathes of material in one go.

The videos in question are those that officials feel may threaten national security. Specifically, they’ll be looking at jihadist content around the war in Syria. This follows concerns about British citizens being drawn into extremism, with hundreds having left to fight in the country.

In response, the UK government has already stepped up legal takedowns of extremist content that breaks the law; the FT reports that 8,000 takedowns have been authorised in the past eight weeks alone, compared to 21,000 across the whole of the preceding four years.

But the key difference between those legal takedowns and the new arrangement on YouTube is that the flagged videos won’t necessarily be illegal.

Security and immigration minister James Brokenshire told the FT that his office was working to do more to combat content “that may not be illegal but certainly is unsavoury and may not be the sort of material that people would want to see or receive.”

That’s an important distinction, and a red flag for those concerned with privacy and internet freedom. While Google is arguably completely within its rights to decide what should and shouldn’t be allowed on its sites (though even that’s controversial given its unrivaled position on the web), should the government have any input on the issue if the content in question is within the law?

To be clear, Google will still have the final say on actually removing content—but giving UK officials this power to get involved in the process is perhaps a sign that the company is slowly relenting to the government’s mission to get internet companies on board with monitoring what Brits can and can’t see online. At least with the ISP filters, which include blocks on extremist content as well as the number one target of porn, there’s a chance to opt out.

Perhaps more worryingly, Brokenshire also suggested his office might explore how search engines and social media could change their algorithms to limit the display of “unsavoury” content. A move like that would likely be highly controversial with tech companies, and it would raise a lot of questions about web companies’ responsibilities: Should they be responsible for essentially setting a taste level, and deciding what classes as “too extremist,” even if it’s perfectly legal? Or do they have a duty instead to push for an open web, and to limit government interference?

Google already does something similar to block child abuse images from its search results, for example, but there is again the issue that “unsavoury” material is very different from illegal material, however much most people might dislike the idea of extremist content on the web.

And then there’s the question of efficacy. If the government’s effort to stamp out child porn is anything to go by, there will likely be big holes in its methodology here too; the biggest, perhaps, being that YouTube is far from the only video service on the web.