Why The Future of Facebook Moderation Is Increasingly Artificially Intelligent

Live videos are next in line to be policy-checked by AI.

Dec 2 2016, 2:45pm

Image: Facebook

Facebook will soon hand control of the live videos you see on your news feed over to artificial intelligence, it was revealed this week, just as the social network launches an AI puffery campaign to teach its customers that "artificial intelligence is not magic."

Joaquin Candela, the company's director of applied machine learning, told Reuters that Zuckerberg and his cohorts want to ramp up the use of AI in monitoring content posted to Facebook—AI that would automatically flag offensive material.

That's useful, definitely. There has already been a spate of tragic incidents involving suicides on Facebook Live, the company's live broadcasting platform, and AI would be far quicker and more thorough than human reviewers in flagging that sensitive content for removal or censorship.

Candela told Reuters the AI for monitoring live video is still in the research phase, however, as it comes up against two challenges. "One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down."
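The second challenge Candela describes, surfacing the most urgent AI flags to a human expert first, is essentially a priority queue. A minimal sketch of how such a review queue might work, assuming the vision model emits a confidence score per flagged video (all names and scores here are hypothetical, not Facebook's actual system):

```python
import heapq

class ReviewQueue:
    """Toy review queue: the flags the AI is most confident about
    surface first for a human moderator. Purely illustrative."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal scores stay in arrival order

    def flag(self, video_id, score):
        # heapq is a min-heap, so negate the score to pop the highest first
        heapq.heappush(self._heap, (-score, self._counter, video_id))
        self._counter += 1

    def next_for_review(self):
        """Return the highest-priority flagged video, or None if empty."""
        if not self._heap:
            return None
        _, _, video_id = heapq.heappop(self._heap)
        return video_id

queue = ReviewQueue()
queue.flag("live_123", 0.42)   # weak signal from the vision model
queue.flag("live_456", 0.97)   # strong signal
print(queue.next_for_review())  # -> live_456, the strongest flag
```

The real difficulty Candela points to is upstream of this: producing a trustworthy score fast enough to matter on a live stream.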

The process marks a departure from Facebook's traditional method of having human employees sift through content flagged by Facebook users as inappropriate and check it against Facebook's policies. That method has failed spectacularly in several recent high-profile cases, but would artificial intelligence be any better? According to Candela, the policy AI would use "an algorithm that detects nudity, violence, or any of the things that are not according to [Facebook] policies." Motherboard has asked Facebook for more information regarding its AI plans, but has yet to receive a response.

But before it can even begin learning to identify censorable content in videos, Facebook's artificial intelligence has to be taught, and that means monitoring the actions of Facebook users more closely. A fundamental part of 'deep learning', the branch of machine learning in which algorithms loosely imitate how a brain works, is the huge database of samples required before the system can complete tasks humans take for granted: recognising the individual features that make up an aeroplane, for example, but also recognising the object as a whole as an aeroplane. This means every word you type, every photo you look at, and every message you read could be under the magnifying glass.

Read more: Dakota Pipeline Protesters Claim Facebook Censored Video of Mass Arrest

Artificial intelligence has changed the world. As rightly pointed out by Facebook, on an average morning you've "used artificial intelligence (AI) more than a dozen times—to be roused, to call up local weather report, to purchase a gift, to secure your house, to be alerted to an upcoming traffic jam, and even to identify an unfamiliar song."

But the recurring theme with Facebook, it seems, is that to give, it has to take. Artificial intelligence may not be magic, but Facebook does have one thing in common with magicians: the sleight of hand needed to divert the eyes of the audience. With fake news possibly influencing an entire election, and filter bubbles distracting millions of users, artificial intelligence needs to bring truth to a convoluted, messy internet.
