The EU Says Protecting Free Speech Means Deleting Things from the Internet
In the wake of the Charlie Hebdo attacks, the EU has mixed ideas on freedom of expression.
And so it begins. Following the terrorist attacks on the offices of French satirical magazine Charlie Hebdo, the EU has issued a joint statement condemning the act and pledging to prevent extremism while safeguarding freedom of expression. The leaders' suggestion? More surveillance and internet censorship.
The statement, adopted by EU representatives including UK Home Secretary Theresa May, focuses on addressing radicalisation "in an early stage." It condemns the January 7 attacks, in which two Islamist gunmen killed 12 people, and specifically mentions the internet as a factor in the "fight against radicalisation."
"We are concerned at the increasingly frequent use of the internet to fuel hatred and violence and signal our determination to ensure that the internet is not abused to this end, while safeguarding that it remains, in scrupulous observance of fundamental freedoms, a forum for free expression, in full respect of the law," the statement reads.
"With this in mind, the partnership of the major internet providers is essential to create the conditions of a swift reporting of material that aims to incite hatred and terror and the condition of its removing, where appropriate/possible," it continues.
The EU statement does not go into details of how flagging and removing content would happen—for now it's just a memo.
But commenters have pointed out the potential incongruity of fighting for freedom of expression by removing material from the internet. The Charlie Hebdo attacks followed controversy over the magazine's publication of cartoons depicting Muhammad, and in its wake many have adopted the "Je suis Charlie" ("I am Charlie") phrase in support of free speech.
But what exactly should be protected under freedom of expression isn't always black-and-white. That's starkly illustrated in this statement, which advocates removing material in the sentence immediately following a call for online freedom.
To suggest that absolutely everything on the internet should be protected—no matter what—would be naive, but it's not the first time that politicians have tested the limits. In the UK, there has been discussion around flagging "extremist" content on YouTube that is deemed "unsavoury" but, crucially, might not break the law.
UK Prime Minister David Cameron also said today that, if he wins the upcoming general election, he will revive legislation popularly known as the "snooper's charter" (officially the Draft Communications Data Bill), which would give intelligence agencies greater access to communications.
"It is wholly unacceptable for this tragedy in Paris to be used as a means to call for a return of the Snoopers Charter," said Emma Carr, director of Big Brother Watch, in a statement. "It is the wrong solution and would divert resources from focused surveillance operations at a time when the agencies are already struggling to cope with the volume of information available."
Commenting on internet censorship in general, she said: "Politicians and civil servants should not be the ones who decide what we can and can't see online. If content is to be blocked then it should be a court deciding that it is necessary and proportionate to do so."
It's the big question: What counts as extremist material, and at what point is its removal more important than the general desire for free speech? Perhaps more importantly: Who should be responsible for those decisions?
The EU statement additionally supports countering terrorist propaganda by promoting "positive, targeted and easily accessible messages" aimed at young people considered vulnerable to radicalisation, and stresses the need for greater intelligence sharing. A reduction in the illicit trade of firearms and tougher requirements at border crossings are also included.