


36 Days After Christchurch, Terrorist Attack Videos Are Still on Facebook

The videos on Facebook and Instagram show sections of the raw Christchurch attack footage, and variations continue to thwart Facebook's moderators and technology.
Christchurch attack. Image: Shutterstock

This piece is part of an ongoing Motherboard series on Facebook's content moderation strategies. You can read the rest of the coverage here.

36 days after a terrorist in Christchurch, New Zealand, live streamed their attack on Facebook, the world’s biggest and best-resourced social media network is still hosting copies of the violent attack video on its own platform as well as on Instagram.

Some of the videos, which are slices of the original 17-minute clip, are trimmed down to chunks of roughly one minute and can be viewed by anyone. In one instance, instead of removing the video, which shows the terrorist shooting and murdering innocent civilians from a first-person perspective, Facebook has simply marked the clip as potentially containing “violent or graphic content.” A video with that tag requires Facebook users to click a confirmation that they wish to view the footage.


The news highlights Facebook’s continued failure to keep off its platform one of the highest-profile pieces of white supremacist terrorist propaganda, material that originated on Facebook in the first place.

“That these horrific videos posts which are over one month old are still appearing on Facebook and Instagram documents that Facebook needs to re-think its AI [artificial intelligence] and human moderators,” Eric Feinberg, founder of the Global Intellectual Property Enforcement Center (GIPEC), a cybersecurity company that alerted Motherboard to the videos’ presence, wrote in an email.

One version of the Christchurch attack video on Facebook is not the original footage itself, but a screen recording of the video playing on the attacker’s Facebook profile before it was closed down. Another version of the video on Facebook is a screen capture of someone watching a section of the attack on Twitter.

Got a tip? You can contact this reporter securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

These permutations of the footage present some issues for Facebook. The company told Motherboard that because users were uploading variations of the video like these, Facebook was also using audio technology to try to detect clips of the attack. (It is also common with terrorist attack footage for some uploaders to, say, add a black border to a piece of content so it bypasses a social media company’s detection systems.) Once a clip is removed, the company adds that variation to its list of content to automatically block. Facebook told Motherboard it is investing in technology and research to identify edited versions of these sorts of videos.
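As a rough illustration of how that kind of matching works in general (this is not a description of Facebook’s actual pipeline), the sketch below compares perceptual hashes of sampled video frames against a blocklist of hashes taken from known footage. Unlike exact file hashes, perceptual hashes tolerate re-encoding and mild quality changes, but edits such as an added black border can still shift them enough to slip past a match, which is why an audio signal is a useful complement. The libraries, threshold, and function names here are assumptions chosen for illustration only.

```python
# Illustrative sketch: match an uploaded clip against a blocklist of
# perceptual hashes of known footage. NOT Facebook's system.
import cv2            # pip install opencv-python
import imagehash      # pip install imagehash
from PIL import Image

HAMMING_THRESHOLD = 8          # assumed tolerance; real tuning is platform-specific


def frame_hashes(video_path, every_n_frames=30):
    """Yield perceptual hashes for sampled frames of a video."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            yield imagehash.phash(Image.fromarray(rgb))
        index += 1
    capture.release()


def matches_blocklist(video_path, blocklisted_hashes):
    """Return True if any sampled frame is close to a blocklisted hash."""
    for h in frame_hashes(video_path):
        # Subtracting two imagehash values gives their Hamming distance.
        if any(h - blocked <= HAMMING_THRESHOLD for blocked in blocklisted_hashes):
            return True
    return False
```

The weakness this sketch makes visible is exactly the one described above: every re-crop, border, or screen recording of a screen recording produces slightly different frames, so the blocklist only catches variations that have already been seen and hashed.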


One of the clips shows the terrorist walking up to the first mosque he targeted, and opening fire. The video does not show the full attack, and stops at the 01:15 mark. It still, however, shows the murder of multiple civilians. Other clips on Facebook and Instagram show similar sections of the attack.

“This video was automatically covered so you can decide if you want to see it,” a panel underneath the video reads. Notably, Guy Rosen, Facebook’s VP of Product Management, said in a post published after the attack that Facebook’s automatic detection systems had not spotted the original video.

This variation has been up since around the time of the attack, with comments stretching back four weeks.

Motherboard shared a link to one of the videos with Facebook to provide enough context for the company to respond; Facebook removed the offending video.

“The video did violate our policies and has been removed. We designated both shootings as terror attacks, meaning that any praise, support and representation of the events violates our Community Standards and is not permitted on Facebook,” a Facebook spokesperson told Motherboard in an email.

Notably, all of the clips Feinberg found were from Arabic-language pages.

To be clear, the original attack live stream and these trimmed versions unambiguously violate Facebook’s policies. Facebook removed the original live stream after New Zealand police flagged the video once the attack had started; the first Facebook user report came 29 minutes after the stream began.

This week Motherboard showed that off-the-shelf image recognition systems can detect weapons in the Christchurch footage, which could then be used to push a similar stream to a moderator for review. How effective that process would be in practice is unclear, though, in part because of Facebook’s scale.
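For a sense of what that off-the-shelf approach looks like, here is a minimal sketch that samples frames from a clip and asks a general-purpose image recognition API whether they show weapons, so the stream could be queued for a human moderator. It uses Amazon Rekognition via boto3 purely as one example of such a service; the label names, confidence threshold, and sampling rate are assumptions, and nothing here reflects Facebook’s internal tooling.

```python
# Illustrative sketch: flag a clip for human review if a general-purpose
# image recognition API reports weapon-related labels in sampled frames.
import boto3   # pip install boto3 (requires AWS credentials)
import cv2     # pip install opencv-python

WEAPON_LABELS = {"Weapon", "Gun", "Firearm"}   # assumed label vocabulary
MIN_CONFIDENCE = 70.0                          # assumed threshold

rekognition = boto3.client("rekognition")


def frame_contains_weapon(frame) -> bool:
    """Send one video frame to the API and check for weapon-related labels."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return False
    response = rekognition.detect_labels(
        Image={"Bytes": jpeg.tobytes()},
        MinConfidence=MIN_CONFIDENCE,
    )
    return any(label["Name"] in WEAPON_LABELS for label in response["Labels"])


def scan_clip(video_path, every_n_frames=60) -> bool:
    """Return True if any sampled frame appears to show a weapon."""
    capture = cv2.VideoCapture(video_path)
    index, flagged = 0, False
    while not flagged:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            flagged = frame_contains_weapon(frame)
        index += 1
    capture.release()
    return flagged
```

Even with a filter like this, every flagged stream still needs a human decision, which is where the scale problem the article points to comes back in.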
