
Google Is Sorry Its Sentiment Analyzer Is Biased

The company’s Cloud Natural Language API rated being a Jew or homosexual as negative.

Google messed up, and now says it's sorry.

On Wednesday, Motherboard published a story written by Andrew Thompson about biases against ethnic and religious minorities encoded in one of Google's machine learning application programming interfaces (APIs), the Cloud Natural Language API.

Part of the API analyzes text and assigns it a sentiment score on a scale from -1 (negative) to 1 (positive). The AI was found to label sentences about religious and ethnic minorities as negative, indicating it's inherently biased. It labeled both being a Jew and being a homosexual as negative, for example.
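For readers who want to reproduce the check themselves, here is a minimal sketch of requesting a sentiment score from the API. It assumes the google-cloud-language Python client library and working application-default credentials; the class and method names follow the current v1 client and may differ slightly from the version available when this story ran.

# Minimal sketch: document-level sentiment from the Cloud Natural Language API.
# Assumes `pip install google-cloud-language` and configured credentials.
from google.cloud import language_v1

def sentiment_score(text: str) -> float:
    """Return the document sentiment score, from -1.0 (negative) to 1.0 (positive)."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

if __name__ == "__main__":
    # Sentences of the kind tested in Motherboard's story.
    for sentence in ["I'm a Jew.", "I'm a homosexual."]:
        print(sentence, sentiment_score(sentence))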


Google has now vowed to fix the problem. In response to Motherboard's story, a spokesperson from the company said it was working to improve the API and remove its biases.

"We dedicate a lot of efforts to making sure the NLP [Natural Language Processing] API avoids bias, but we don't always get it right. This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models," a Google spokesperson said in an email. "We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone."

Artificially intelligent systems are trained by processing vast amounts of data, often including books, movie reviews, and news articles. Google's AI likely learned to associate certain groups with negative sentiment because it was fed biased data. Issues like this are at the core of AI and machine learning research, and critics of such technologies say they need to be fixed to ensure the technology works for everyone.
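To make that mechanism concrete, here is a toy illustration (not Google's model, and hypothetical throughout) of a classifier absorbing bias from skewed training data: a placeholder token, group_x, appears only in negatively labeled examples, so the trained model leans negative on an otherwise neutral sentence that mentions it. It assumes scikit-learn is installed.

# Toy example: a sentiment classifier picks up bias from co-occurrence alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "what a wonderful day",
    "the food was great",
    "i love this movie",
    "terrible service from group_x staff",   # the group term appears
    "group_x people ruined the event",       # only in negative examples
    "awful experience, never again",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A factually neutral sentence about the group leans negative, purely because
# of the co-occurrence statistics in the training data.
print(model.predict_proba(["i am group_x"]))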

This isn't the first example of AI bias to be uncovered, and it likely won't be the last. Researchers don't yet agree on the best way to prevent artificial intelligence systems from reflecting the biases found in society. But we need to continue to expose instances in which AIs have learned to embody the same prejudices that humans do.

It's not surprising that Google wants to fix this particular bias, but it is noteworthy that the company apologized and pointed toward a goal of building more inclusive artificial intelligence. Now it's up to the company and everyone else working on the tech to develop a viable way of doing so.

Got a tip? You can contact this reporter securely on Signal at +1 201-316-6981, or by email at louise.matsakis@vice.com
