
A Brief History of YouTube Censorship

For as long as it has existed, YouTube has been under pressure from different governments to remove a wide range of content.

Over the last few months, YouTube has come under fire for its content moderation decisions. The company, like other hosts of user-generated content, is not obliged to take down most content under US law, nor can it be held liable for much of what it hosts, giving it significant power over its users’ expression.

YouTube’s emergence amid the blogging boom of the mid-2000s was revolutionary. Suddenly, anyone could easily share their own videos with the entire world. The platform, created by Chad Hurley and Steve Chen—both under 30 at the time—was quickly snatched up by Google for a whopping $1.65 billion just a year after its launch. In 2006, Time magazine named “you” its person of the year, dubbing YouTube “the people’s platform” and crediting it with the “opportunity to build a new kind of international understanding.” YouTube was on its way up.


But almost immediately, YouTube was faced with tough decisions about what types of content it should—or could, legally—allow. The Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (“CDA 230”)—two of the most important regulations governing user-generated content—had only been around for a decade, and had not yet been significantly applied in the international sphere. Both would soon come to be tested as YouTube and other platforms rapidly transformed the way people communicate and share content online.

YouTube’s first content struggles came not long after Google’s acquisition of the company, when, in 2006, the Japanese Society for Rights of Authors, Composers, and Publishers successfully issued DMCA takedown requests for more than 30,000 pieces of content. Shortly thereafter, Comedy Central filed a complaint for copyright infringement, resulting in the removal of its content from the platform.

As the company’s struggle with copyright holders grew, activists in a number of countries were sharing videos on the platform to draw attention to local issues. In Morocco, for example, the now-famous “Targuist sniper” posted videos of police demanding bribes from passing motorists that he had filmed from a nearby hill, sparking a national conversation about corruption. Tunisian activists used the platform to share video testimonies of former political prisoners. In response, the governments of both countries blocked YouTube. By 2008, more than half a dozen countries, including Brazil, China, Syria, Thailand, Pakistan, and Turkey had blocked the platform—temporarily or otherwise.


It wasn’t long before YouTube, faced with the possibility of being blocked in even more locales, was forced to take a stronger stance on certain types of content. One of the company’s first controversial decisions came in 2007, when it suspended Egyptian user Wael Abbas. Abbas, an award-winning blogger and anti-torture activist, had used the platform to draw attention to police brutality in Egypt, uploading more than a hundred such clips. Although there were rumors that YouTube’s decision had come at the behest of the Egyptian government, the company later told Abbas that it had removed the videos after receiving numerous complaints from other users.

The decision to suspend Abbas was widely (and rightly) criticized as censorship—and yet, later that same year, after the company was accused of profiting from footage of children being beaten, a spokesperson claimed that censorship wasn’t its role.

In 2008, YouTube also made a significant policy change, instituting a clear three-strikes rule for (non-copyright) community guidelines violations. The platform gave users a clean slate if they went six months without a further violation, and in 2010 it added the ability for users to appeal strikes that they believed had been wrongfully applied. In doing so, YouTube formalized its role as an arbiter of appropriateness.

Later that year, Senator Joseph Lieberman called on YouTube to remove videos marked with the logos of al Qaeda and other known terrorist groups, but the company refused, claiming that the videos didn’t violate its policies against violence or hate speech. YouTube later changed its tune, adding a rule banning “inciting others to violence.”


In October of that year, controversial British comic Pat Condell uploaded a clip entitled “Welcome to Saudi Britain” in which he ridiculed British Muslims while simultaneously denouncing the “corrupt” Saudi regime. The company removed the video and issued a warning to Condell, noting YouTube’s community guidelines and stating that users who break the rules risk having their account disabled. Notably, the company did not state which rule Condell had violated.

YouTube’s efforts to grapple with its role as an arbiter of expression came to a head amid the Arab uprisings, which were viewed by the world on the platform. As people posted violent videos of protests and ensuing state crackdowns from across the Middle East and North Africa, the company was forced to change its approach to moderation.

“Normally, this type of violence would violate our community guidelines and terms of service and we would remove them,” YouTube’s then-Manager of News, Olivia Ma, stated in May 2011. “But we have a clause in our community guidelines that makes an exception for videos that are educational, documentary or scientific in nature…. In these cases, we actually make an exception and say we understand that these videos have real news value.”

Rapid growth

By the end of 2011, YouTube already had around 800 million users (compared to more than 1 billion today), and was ranked by Alexa as the third most-visited website in the world. As such, the company increasingly found itself caught between various actors with competing needs and values—and under pressure from governments to remove content, or risk being blocked.


In 2012, Google noted an “alarming” rise in takedown requests from a range of governments, including a rise in requests from the United States, with a great deal of those requests focused on YouTube content.

It was around this time that the company’s decision-making became considerably more questionable. Just a few weeks after the attack on the US diplomatic compound in Benghazi, Libya, YouTube found itself embroiled in controversy. US intelligence claimed that The Innocence of Muslims—an anti-Islamic short film produced by an Egyptian-American Christian and hosted on the platform—was partially responsible for the Benghazi attacks and, following a call from the White House, YouTube blocked the video in Libya and in neighboring Egypt, where protests were beginning to erupt (despite the fact that the film had already been shown on Egyptian television).

The decision was made without input from the Libyan or Egyptian governments, or from civil society organizations in either country, and was followed by further demands from several other countries, including Saudi Arabia, Malaysia, and Pakistan, to remove the same video. Strangely, YouTube responded positively to some of those requests while refusing to comply with others—in the case of Pakistan, its refusal led directly to the platform being blocked wholesale by that country’s government.


As the war in Syria escalated and ISIS emerged as a major actor in the Middle East, social media platforms took on an even more paternalistic role, and YouTube was no exception. As calls to censor violent and hateful content increased, so too did YouTube’s willingness to remove it—but it was inevitable that this approach would eventually backfire.

Too much power?

Over the past few years, the company has faced significant criticism, both from those who feel it still isn’t doing enough to curb the spread of hateful content, and from those who feel that some of the company’s measures have gone too far. On the one hand, YouTube has invested significant resources into going after extremist content: In 2016, it added automation to its moderation repertoire and last year, announced a plan to “bury” extreme content that doesn’t actually run afoul of its rules. The company also banned advertisements from running alongside videos that contain “hateful” or discriminatory content, and expanded its “trusted flagger” program to include a variety of NGOs designated to report content.

On the other hand, critics point out, YouTube’s policies have had a censorious effect on vital content, particularly videos emerging from the Syrian battleground. In recent years, the company has censored politically biased—but factual—news-focused accounts such as that of MEMRI, flagged accounts that used profanity as inappropriate for advertising, and hidden LGBTQ+ content, including rights advocacy, behind an age-based interstitial.

Even more recently, in the wake of several incidents in which measurable real-life harm has been attributed to YouTube’s influence, the company has taken an even more proactive stance. Following several deaths of young people participating in the “Tide Pod challenge,” YouTube stated that it was working to quickly remove videos of people eating the laundry pods. The company has banned certain neo-Nazi groups entirely, including the Atomwaffen Division. And following the deadly school shooting in Parkland, Florida, YouTube has banned firearm demo videos and content promoting gun sales, and has taken steps to remove conspiracy videos, giving Infowars two strikes for its propagation of such content (whether the company is actually willing to mete out a third and final strike remains to be seen).

Today, companies like YouTube face a difficult challenge. They’re under pressure from different governments to remove a wide range of content, and any decision they make to censor will undoubtedly lead to more censorship. At the same time, they’re beholden to advertisers, shareholders, and, to a lesser degree, their own users, and are often forced to strike a balance between competing views of acceptable content.

But this is where leadership has failed, and why tech companies’ attempts to play neutral have been met with derision. YouTube, whether its policymakers and executives like it or not, is already an arbiter of speech, not merely a technology company. And so the decisions it makes reflect its values, and the values of those who make and enforce its policies. As for what those values are? That remains to be seen.