

It’s Too Late—We’ve Already Taught AI to Be Racist and Sexist

Now what?
Image: Shutterstock/studiostoks

They say that kids aren't born sexist or racist—hate is taught. Artificial intelligence is the same way, and humans are fabulous teachers.

ProPublica reported, for example, that an algorithm used to predict the likelihood of convicts committing future crimes tends to tag black folks as higher risk than whites. Despite the oft-repeated claim that such data-driven approaches are more objective than past methods of determining the risk of recidivism or anything else, it's clear that our very human biases have rubbed off on our machines.


Consider the case of Microsoft's simple Tay bot, which sucked up all the slurs and racist opinions that Twitter users threw at it and ended up spouting Nazi drivel.

But the prejudice in algorithms may be more understated than that, according to Emiel van Miltenburg, a PhD student in the humanities at Vrije Universiteit Amsterdam. Miltenburg analyzed image descriptions in the Flickr30K database, a popular corpus of annotated Flickr images that's used to train neural networks, and discovered a pattern of sexist and racist bias in the language used to describe the images.

These descriptions are generated by human crowdworkers, and computers have been using them as learning materials to teach themselves how to recognize and describe images for years.

"This is an issue that we are only beginning to become aware of and address"

Miltenburg found that an image of a woman talking to a man at work was labeled as an employee being scolded by her boss, which assumed that the man in the photo is the woman's boss, and not the other way around. In many cases, Miltenburg notes, people who merely look Asian are indiscriminately labeled as Chinese or Japanese. White babies are simply babies, while black babies are often qualified by their race.
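That kind of asymmetry is simple to measure. The toy Python sketch below is not Miltenburg's actual code, and its captions, groups, and word list are invented for illustration, but it shows the basic shape of such an audit: count how often a caption attaches an ethnic qualifier to the person pictured, broken out by who is shown.

```python
from collections import Counter

# Hypothetical (image_id, subject_group, caption) records standing in for
# crowdsourced annotations like those in Flickr30K.
records = [
    ("img01", "white", "A baby crawls across the carpet."),
    ("img02", "black", "A black baby plays with a toy."),
    ("img03", "black", "An African-American baby smiles at the camera."),
    ("img04", "white", "A baby sleeps in a crib."),
    ("img05", "asian", "A Chinese man reads a newspaper."),
]

# Words that explicitly mark race or ethnicity (an invented, incomplete list).
ETHNIC_MARKERS = {"black", "white", "asian", "chinese", "japanese", "african-american"}

def mentions_ethnicity(caption: str) -> bool:
    """Return True if the caption explicitly marks the subject's ethnicity."""
    words = caption.lower().rstrip(".").split()
    return any(word in ETHNIC_MARKERS for word in words)

marked, total = Counter(), Counter()
for _, group, caption in records:
    total[group] += 1
    if mentions_ethnicity(caption):
        marked[group] += 1

for group in total:
    print(f"{group}: ethnicity mentioned in {marked[group] / total[group]:.0%} of captions")
```

On this made-up data, white subjects are never racially marked while black and Asian subjects always are, which is exactly the pattern Miltenburg describes finding in the real captions.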

"I would not be surprised if some of the crowdsourced image captions in Flickr30K convey racial or gender stereotypes, although this was obviously not our intent," Julia Hockenmaier, the lead researcher behind the Flickr30K database, wrote in an email. "Our aim was to collect factual descriptions of the events and the people shown in the images."


Still, if these biases were played out in the real world, they would look an awful lot like what we might call casual racism and sexism.

"We should acknowledge that there are examples of sexism in the data, and be aware that this is not okay," Miltenburg wrote me in an email from Amsterdam. "People are training machines to look at images from an American perspective. And not just an American perspective, but a white American perspective."

Miltenburg's paper is available on the arXiv preprint server, and was presented on Tuesday at the Language Resources and Evaluation Conference in Slovenia.

Miltenburg hasn't tested whether software trained on these image descriptions actually generates new, biased descriptions of its own. But if machines are being trained on data that reproduces age-old human prejudices, it follows that the resulting AI will carry those biases too.

"If that's the data that you're providing to AI, and you're challenging AI to mimic those behaviours, then of course it's going to mimic those biases," said Jeff Clune, a professor of computer science at the University of Wyoming who specializes in deep learning. "Sadly, if it didn't do that, then there's something wrong with the technology."

In other words, computers aren't evil, or good, or anything other than electricity pulsing through a wire. Like Microsoft's Tay bot, they're just doing what they're told, albeit on a grander and more unpredictable scale.


One can imagine that relying on software trained on similarly biased data could be problematic when, say, deciding whether or not to give someone health insurance coverage. Even seemingly "objective" information—housing or incarceration rates, and income trends, for example—may harbor systemic prejudice that will be incorporated into AI.

"I think this is an issue that we are only beginning to become aware of and address as machine learning-based models become more sophisticated and more widely used in real-world applications," Hockenmaier wrote.

How can we reverse course?

"You can think about the AI as a human child"

For Miltenburg, the problem is that Flickr30K's method of collecting image descriptions doesn't do enough to mitigate human bias. Annotating images to teach machines should, Miltenburg wrote, be treated more like a psychological experiment and less like a rote data collection task.

"A good first step is to not just collect descriptions from people from the US, but try to balance the dataset: get workers from Australia, the UK, India, and Hong Kong," Miltenburg wrote. "We also need to have more data about the people who contribute descriptions, so as to control for gender, age, and other variables."

By tightening the guidelines for crowdworkers, researchers would be able to better control what information deep learning software vacuums up in the first place. This is the same idea behind calls to make AI read holy books or The Giving Tree, or any other text that appeals to humanity's better angels.


"One could certainly create annotation guidelines that explicitly instruct workers about gender or racial stereotypes," wrote Hockenmaier.

The other option, Clune said, is to train the software itself to ignore certain kinds of information.

"To some extent, you can think about the AI as a human child," Clune said. "You don't want a child to hang out with racist or discriminatory people, because it will parrot those sentences and predispositions."

The future will involve a lot less coding, and a lot more training. And if Clune is right about AI being like a human child, then we have to act more like responsible parents: imbuing it with a sense of right and wrong, and keeping an eye on which books it reads and which movies it watches.

Because it's inevitable that AI, just like a real human, will eventually be exposed to some very bad ideas, intentionally or not. It needs to be taught how to ignore them.