
Leaked Documents Show How Instagram Polices Stories

Motherboard has obtained internal documents that show how Instagram moderators grapple with policing the service's popular Stories feature.
Image: Shutterstock

This piece is part of an ongoing Motherboard series on Facebook's content moderation strategies. You can read the rest of the coverage here.

Newly leaked internal documents obtained by Motherboard detail how Instagram polices content published through its Instagram Stories feature, which allows users to publish short videos and static images that generally stay on profiles for 24 hours. The fact that stories often consist of multiple discrete parts can make them particularly difficult to moderate, the documents show.


In particular, the documents show how Instagram’s moderators have to grapple with the context of a story. Though an individual photo or video might not violate the network’s terms of service by itself, that can change when taken together with other content from the user.

"Stories can be abused through posting multiple pieces of non-violating content to portray a violating narrative. Reviewing these pieces of content individually prevents us from accurately enforcing against stories,” one internal document used to train moderators, dated October 2018, reads.

Caption: A section of the Instagram documents. Image: Motherboard.

Users can archive their stories, keeping them permanently on their profile, but most stories only stay up for 24 hours. This ephemerality may also add to the difficulty of moderating stories.

The documents provide several examples of what sort of violating content Instagram users have posted as stories. These include the sale of drugs and flaunting of weapons, the trading and selling of sexual imagery, and child exploitation imagery, or CEI. Another Instagram document, previously obtained by Motherboard and published in part in May, showed that Instagram describes some of these types of content, among others, as “PR fires.”

Supporters of the Islamic State have previously used Instagram Stories to spread propaganda for the organization. Some users tag others in stories in order to harass them.

Got a tip? You can contact this reporter securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.


Because stories are often posted as a sequence of individual photos and videos meant to be viewed one-after-the-other, a single post may not provide the full context needed to determine if it violates Instagram’s rules. The documents say moderators need to establish the intent behind the content by viewing a reported post in context.

“Apply the context gained from each piece of content downwards throughout the entire gallery of photos and videos to establish the intent of the story,” another section of the document reads.

The slides give an example of three consecutive stories. The first piece of content shows the inside of a car with the text "HMU", or "hit me up". The other two show buds of cannabis. Individually, they may not be violating, but taken together, it is clear the user is advertising cannabis for sale, which is against Instagram’s terms of use.

With this in mind, the Instagram document talks about a "holistic" method for reviewing stories. When reviewing a story, moderators are told to also look at up to two adjacent pieces of content before and another two after the reported post to determine whether its intent violates the network's terms, the document adds.

Caption: A section of the Instagram documents. Image: Motherboard.
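The review-window and context-propagation rules described in the documents can be illustrated with a short, purely hypothetical sketch. The data model, the signal names, and the review_reported_item function below are invented for illustration; the documents describe guidelines for human moderators, not automated tooling, and nothing here reflects Instagram's actual systems.

```python
# Hypothetical sketch of the "holistic" review window described in the leaked
# documents: a reported story item is evaluated together with up to two
# adjacent items on either side, and context from earlier items is carried
# forward ("applied downwards") through the gallery. All names and signal
# labels are illustrative assumptions, not Instagram's implementation.

from dataclasses import dataclass


@dataclass
class StoryItem:
    media_id: str
    signals: set[str]  # e.g. {"drug_imagery"}, {"solicitation_text"}


# Signal combinations that only violate policy when seen together,
# mirroring the "HMU" car post followed by cannabis photos in the slides.
VIOLATING_COMBINATIONS = [
    {"solicitation_text", "drug_imagery"},
]


def review_reported_item(gallery: list[StoryItem], reported_index: int) -> bool:
    """Return True if the reported item, viewed in context, violates policy."""
    # Take up to two items before and up to two after the reported post,
    # clamped to the edges of the gallery.
    start = max(0, reported_index - 2)
    end = min(len(gallery), reported_index + 3)
    window = gallery[start:end]

    # Accumulate signals as the reviewer moves through the window in order,
    # so context gained from earlier items informs later ones.
    accumulated: set[str] = set()
    for item in window:
        accumulated |= item.signals
        if any(combo <= accumulated for combo in VIOLATING_COMBINATIONS):
            return True
    return False


# Example mirroring the slides: an "HMU" car interior followed by two cannabis photos.
gallery = [
    StoryItem("1", {"solicitation_text"}),
    StoryItem("2", {"drug_imagery"}),
    StoryItem("3", {"drug_imagery"}),
]
print(review_reported_item(gallery, reported_index=2))  # True: violating only in context
```

In this toy version, each item on its own carries no violating combination of signals, but accumulating context across the window flags the reported post, which is the dynamic the documents describe.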

Instagram did not respond to a request for comment.

As social media companies have grown and the issue of content moderation has loomed ever larger, parts of these firms have leaned towards more automated, machine-based systems for detecting or flagging individual pieces of content. Facebook, for example, deploys machine learning to spot potential terrorist logos in videos. Two sources with direct knowledge of the company’s moderation strategy told Motherboard that Facebook and Instagram largely moderate content in the same way. (Facebook owns Instagram.) Motherboard granted the sources anonymity as they weren’t authorized to speak to the press.

But context is something that these systems can struggle with, as Facebook executives previously explained to Motherboard in interviews at the company’s headquarters this summer. These documents arguably highlight the continuing need for human moderators on social networks. Facebook has previously said it employs some 7,500 moderators.

“I’d say there can’t be enough people to do the job because not many people can hold this job for more than 6 months. Most people can’t,” one of the sources said.

Discovering that context doesn’t always happen, though.

“These moderators seem to forget context. They look at things for just what they are.”