


Leaked Documents Show How Instagram Polices Content to Prevent ‘PR Fires’

Besides revenge porn and terrorism, flags for moderators include ‘Nazi,’ ‘Cartel,’ and ‘Gang,’ according to leaked training documents for Instagram workers.

This piece is part of an ongoing Motherboard series on Facebook's content moderation strategies. You can read the rest of the coverage here.

Newly leaked documents obtained by Motherboard detail some of Instagram’s policies for policing offensive or illegal content on the platform. The Facebook-owned company has to deal with drug dealers using the service to sell their wares, terrorists using Instagram to push propaganda, and people publishing revenge porn too.


The documents highlight the barrage of issues facing social networks, and come after Motherboard reported on similar documents related specifically to Facebook, which showed the social media giant’s constantly shifting approach to sexual exploitation material.

“These are high intensity, prevalent abuse types that have led to PR fires on Instagram,” one of the training slides obtained by Motherboard reads, referring to some of the more problematic content that sometimes appears on Instagram, such as terrorism or drugs. Instagram did not dispute specific and non-public details included in the documents when Motherboard asked about them.

According to the slides, Instagram may disable an account if two or more parts of the profile, such as the username, bio, or profile picture, violate the site’s policies. If the profile is empty, with no posted content, Instagram may disable it if just one part of the account counts as a violation. Instagram may also ban an account when more than 30 percent of its media violates site policies, the documents add.

Caption: A section of the training materials listing terrorism, Nazi, cartel, gang, and other abuse types. Image: Motherboard

The training materials also break down when to ban an account for its interactions with another user’s content. Again, if 30 percent or more of a user’s comments on other people’s posts violate Instagram’s terms, moderators are told to ban the offending account.

There are some types of content that Instagram provides far less leeway for, though.

“We’re comfortable with a lower threshold for the abuse types listed below,” one slide reads. The presentation then lists drug sales, revenge porn, sexual solicitation, suicide and self-injury (SSI), and terrorism. In these cases, a single violating part of the profile is enough to trigger a block.
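Taken together, the slides describe what amounts to a simple decision procedure. The sketch below is purely illustrative, assuming hypothetical field names and abuse-type labels; the leaked material describes guidance for human moderators, not an automated enforcement system.

```python
# Illustrative only: the dataclass fields and abuse-type labels are assumptions
# made for this sketch, not taken from the documents.
from dataclasses import dataclass, field

# Abuse types the slides say get a lower threshold (a single violation suffices).
LOW_THRESHOLD_TYPES = {
    "drug_sales",
    "revenge_porn",
    "sexual_solicitation",
    "ssi",  # suicide and self-injury
    "terrorism",
}

@dataclass
class AccountReview:
    violating_profile_parts: int    # e.g. username, bio, profile picture
    profile_is_empty: bool          # account has no posted content
    media_violation_rate: float     # share of the account's media that violates policy
    comment_violation_rate: float   # share of comments on other users' posts that violate
    abuse_types_found: set = field(default_factory=set)

def meets_action_threshold(review: AccountReview) -> bool:
    """Return True if the thresholds described in the training slides are met."""
    # Lower-threshold abuse types: one violating element is enough.
    if review.abuse_types_found & LOW_THRESHOLD_TYPES and review.violating_profile_parts >= 1:
        return True
    # Two or more violating profile parts, or one if the profile is empty.
    if review.violating_profile_parts >= 2:
        return True
    if review.profile_is_empty and review.violating_profile_parts >= 1:
        return True
    # Over 30 percent of the account's media violating site policies.
    if review.media_violation_rate > 0.30:
        return True
    # 30 percent or more of comments on other people's posts violating terms.
    if review.comment_violation_rate >= 0.30:
        return True
    return False
```

In this reading, the lower-threshold abuse types short-circuit the percentage checks: one violating element tied to drug sales, revenge porn, sexual solicitation, SSI, or terrorism is enough, while other violations are weighed against the two-part and 30 percent rules.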


A screenshot of a tool seemingly used by moderators lists various options for flagging offending content. As well as the previous examples, other options include “Nazi,” “Cartel,” and “Gang.” This reporter has previously covered how Mexican drug cartels and London crime groups have used social media, sometimes including Instagram, to taunt rivals or flaunt their wares. Motherboard also found a reseller linked to a company that allegedly sold encrypted phones to the Sinaloa drug cartel advertising its devices on Instagram.

Other options given to Instagram moderators include graphic violence, coordinating harm, and “credible threat.”

Got a tip? You can contact this reporter securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

Instagram’s training material tells moderators to follow a so-called ‘progressive review’ process, in which they move from one part of the profile to another to check for potential violations. The moderator moves from the username to the profile bio, the profile picture, any media, and then comments, the training material says. Although particular elements of an account may not individually violate any policies, the motivation behind the account, such as spreading revenge porn, may become clearer as the moderator reviews each part one by one.
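Described as a procedure, that review order might be sketched as follows; the function and data shapes below are hypothetical, since the documents describe a manual workflow rather than code.

```python
# Hypothetical sketch of the 'progressive review' order; names and structures
# here are assumptions made for illustration.
from typing import Callable

# Order of review described in the training material.
REVIEW_ORDER = ["username", "bio", "profile_picture", "media", "comments"]

def progressive_review(account: dict, check: Callable[[str, object, list], list]) -> list:
    """Walk the account parts in the documented order, carrying earlier findings."""
    findings: list = []
    for part in REVIEW_ORDER:
        content = account.get(part)
        if content is None:
            continue
        # `check` stands in for the moderator's judgment; passing earlier findings
        # reflects that an account's intent can become clearer part by part.
        findings.extend(check(part, content, findings))
    return findings
```

The point of the ordering is context: carrying earlier findings into each later check mirrors how an account’s purpose can come into focus as the moderator works through it piece by piece.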

Last week Motherboard reported on Facebook’s shifting policies on sexual abuse content, and specifically dick pics. Naturally, Instagram has to deal with a spectrum of sexual exploitation material too.


The Instagram training manuals tell moderators to flag a selection of material as “sexual exploitation—other,” including necrophilia, bestiality, ‘crushing,’ and creepshots. Crushing is when people film themselves or others harming animals for sexual gratification, and creepshots are close-up and often revealing photos of women taken in public, typically without their consent. As Motherboard recently reported, Tumblr has a huge problem with creepshots.

Unsurprisingly, some of the language in the Instagram training material crosses over with similar documents for Facebook moderators. Facebook bought Instagram in 2012 for $1 billion. According to Instagram, the two companies share the same Community Operations team, and their content policies are very similar. Instagram added that it has a team of moderators that responds to reports 24 hours a day, seven days a week. In all, those moderators speak 40 languages and include child safety, hate speech, counter-terrorism, and legal experts, Instagram said. The training slides say that for child exploitation images, the issue will be sent to a specialized team at Facebook headquarters.

There are, however, some differences. Instagram said it has another set of policies around Instagram hashtags, for example.

Instagram told Motherboard that if a user sees content that breaks the site’s guidelines, the in-app reporting mechanism is the quickest way to have profiles or posts reviewed.

Update: This piece has been updated to include additional information from Instagram.