
Mark Zuckerberg Says Facebook Will Have AI to Detect Hate Speech In ‘5-10 years’

In Silicon Valley, everything is "5-10 years away," which is the same as saying nothing at all.

We hear so often that artificial intelligence is soon—very soon—going to be driving our cars or doing our lawyering that a reality check is sometimes needed. Facebook founder Mark Zuckerberg let a doozy slip during his testimony on Tuesday before a joint hearing of two Senate committees that focused on the social network’s data policies.

According to Zuckerberg, Facebook will have effective machine learning tools to automatically detect hate speech in “five to 10 years.” This is unfortunate, because given the circumstances, it’s not entirely clear if Facebook or, frankly, the entire United States will even still exist by then. (For what it’s worth, saying tech is perpetually “five to 10 years” out is a classic Silicon Valley hedge, and two years ago, Zuckerberg said AI could be outperforming humans “in the next five to 10 years.”)


Zuckerberg was responding to a question from Republican Sen. John Thune, who asked what steps Facebook currently takes to identify hate speech on the platform and what some of the challenges are in doing so. In his response, Zuckerberg noted that in Facebook’s early days the company didn’t have AI tools that could automatically flag content, but it has since implemented such tools. Zuckerberg claimed that more than 90 percent of pro-ISIS or Al Qaeda content is now automatically flagged by machines, and that last year the social network started using automated tools to detect when users may be at risk of self-harm and to intervene.

Read More: Facebook’s New Algorithm Combs Posts to Identify Potentially Suicidal Users

“I am optimistic that over a five-to-10-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that,” Zuckerberg said. “Until we get it more automated, there’s a higher error rate than I am happy with,” he added.

Facebook’s convoluted policies around hate speech have long haunted the company. Last year, ProPublica, reporting on internal Facebook documents, noted that the company had hundreds of rules that, at the time, resulted in “white men” being a protected group but not “black children.” Such policies have resulted in PR disasters like Berkeley, California-based rapper Lil B being temporarily banned for “hate speech.”

If the Facebook founder’s comment is to be believed, then Facebook is resigned to these sorts of problems—or “error rates,” as Zuckerberg put it—continuing until we have effective machine learning, which is apparently not coming until I’m middle-aged and we all work our day jobs in literal data mines while plugged into virtual reality headsets.

And, given all the well-documented issues around bias in machine learning tools, there’s no guarantee it will fix Facebook’s problems anyway. More likely, as with any new technology, it will introduce a whole host of unexpected issues.
