
It’s Our Fault That AI Thinks White Names Are More 'Pleasant' Than Black Names

A new study reveals some unsettling trends in our mechanical friends.

We all know that hiring managers can be racist when choosing the "right" (read: white) candidate for a job, but what about computers? If you have a name like Ebony or Jamal at the top of your resume, new research suggests that some algorithms will make less "pleasant" associations with your moniker than if you are named Emily or Matt.

Machines are increasingly being used to make all kinds of important decisions, like who gets what kind of health insurance coverage, or which convicts are most likely to reoffend. The idea here is that computers, unlike people, can't be racist, but we're increasingly learning that they do in fact take after their makers. As just one example, ProPublica reported in May that an algorithm used by officials in Florida systematically rated white offenders as being at lower risk of committing a future crime than black offenders.

Some experts believe that this problem might stem from hidden biases in the massive piles of data that algorithms process as they learn to recognize patterns. One study, for example, has identified a pattern of sexist and racist bias in the human-applied image labels in a popular machine learning dataset.

Now, for the first time, Princeton University researchers Aylin Caliskan-Islam, Joanna Bryson, and Arvind Narayanan say they've reproduced a slew of documented human prejudices in a popular algorithm that was trained on text from the internet.

"We have found every linguistic bias documented in psychology that we have looked for"

The team first looked to earlier studies of human bias. In the late 1990s, psychologist Anthony Greenwald and his colleagues at the University of Washington asked a group of people to complete a simple word association task. On the surface, it seemed innocuous enough: the researchers flashed words in front of their subjects—flowers or insects, for example—and asked them to pair those terms with words that the researchers defined as either "pleasant" or "unpleasant," like "family" and "crash."

The results were revealing. Subjects were quick to associate flowers with "pleasant" words and insects with "unpleasant" words, leading the researchers to conclude that positive feelings were subconsciously linked to flowers, and negative ones to insects. The experiment took a dark turn when the researchers asked another set of all-white participants to make the same judgement call for black- and white-sounding names. The subjects had a strong tendency to mark white-sounding names as "pleasant" and black-sounding ones as "unpleasant."

In this new study, Narayanan and his colleagues duplicated a version of Greenwald's experiment—using a massively popular natural language processing algorithm as their subject, instead of humans—along with other similar research. They believe they've demonstrated, for the first time, that widely used language processing algorithms trained on human writing from the internet reproduce human biases along racist and sexist lines.

"We have found every linguistic bias documented in psychology that we have looked for," the researchers write in a paper posted online, which is awaiting publication.

"If AI learns enough about the properties of language to be able to understand and produce it," the authors continue, "it also acquires cultural associations that can be offensive, objectionable, or harmful."

Narayanan and his colleagues declined an interview with Motherboard, citing restrictions placed on researchers by the journal currently reviewing the paper.

The team used a popular algorithm for processing natural language called GloVe. GloVe is trained on text gathered from large-scale crawls of the web, and represents each word as a point in a map that captures the semantic connections between words. Since the measurement in Greenwald's study was the time it took people to connect terms with other words—not really a factor for the algorithm, since everything is represented in one big table—the researchers instead looked at the relative distance between terms in the map that GloVe spat out.
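
To make that concrete, here is a minimal sketch of how closeness between words can be measured with pretrained GloVe vectors. The file name and word choices are illustrative assumptions, not the researchers' actual setup; the sketch simply loads GloVe's standard text format (one word per line, followed by its vector components) and compares two words with cosine similarity.

import numpy as np

def load_glove(path):
    # Parse a GloVe text file into a {word: vector} dictionary.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    # Cosine similarity: values closer to 1 mean the words sit nearer each other in the map.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = load_glove("glove.6B.300d.txt")  # hypothetical local copy of pretrained GloVe vectors
print(cosine(vecs["flower"], vecs["pleasant"]))
print(cosine(vecs["insect"], vecs["pleasant"]))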

The Princeton team employed the exact same terms that Greenwald and his colleagues used in their experiment, and found remarkably similar results with GloVe. The algorithm more closely associated flowers with "pleasant" words and insects with "unpleasant" words, for example. White-sounding names (Emily and Matt) were also more closely associated with "pleasant" words, and black-sounding names (Ebony and Jamal) with "unpleasant" words.
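
In rough terms, that kind of comparison can be scored by asking whether a target word sits closer to a set of "pleasant" words than to a set of "unpleasant" ones. The sketch below illustrates the idea with made-up word lists; it is not the paper's exact statistic, and it assumes the {word: vector} dictionary built in the previous sketch.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, vecs):
    # Mean similarity to the pleasant words minus mean similarity to the unpleasant words.
    # Positive values mean the word leans "pleasant"; negative values mean it leans "unpleasant".
    pos = np.mean([cosine(vecs[word], vecs[w]) for w in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[w]) for w in unpleasant])
    return float(pos - neg)

# Illustrative attribute lists -- not the exact terms from the original studies.
PLEASANT = ["love", "peace", "friend", "family", "happy"]
UNPLEASANT = ["agony", "crash", "filth", "hatred", "ugly"]

# Example usage, given vecs from the previous sketch:
# for name in ["emily", "matt", "ebony", "jamal"]:
#     print(name, association(name, PLEASANT, UNPLEASANT, vecs))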

Perhaps most concerning, the researchers also tested GloVe with terms used in a 2004 study by Belgian economist Marianne Bertrand that sent thousands of resumes to employers with only one difference: some resumes had white-sounding names attached to them, and some had black-sounding names. Bertrand found that employers favoured white names, and GloVe showed a similar bias.

"Inequality in society is first and foremost a human problem that we need to solve together"

Of course, if an algorithm were actually to make hiring decisions, one would hope that its designers would screen out names as a relevant factor. But the point is that the machine had bias similar to that of people.

By systematically replicating psychological experiments, the Princeton study adds an important layer to our growing understanding of machine bias. Not only do datasets contain prejudices and assumptions, both positive and negative, but algorithms currently in use by the research community are dutifully reproducing our worst values.

The question now is: what the hell can we do about it?

"Inequality in society is first and foremost a human problem, that we need to solve together," said Emiel van Miltenburg, a PhD student at Vrije Universiteit Amsterdam who discovered sexist and racist bias in a popular machine learning dataset earlier this year.

"As for automated decision making, we should be aware of these issues so that they won't be amplified," he continued. "At the same time, I believe that technology can be a positive force. But awareness of social issues and transparency about the decision procedure are essential."

Some researchers have suggested limiting what kinds of information AI sucks up—if it only learns the good bits about humanity, then we can expect a machine to act like a really good person. But the Princeton researchers note that this isn't quite right. Every human encounters cultural biases, both good and bad, throughout their lives, and still we learn to take that information and act in an ethical way that may go against the biases we've encountered—ideally, anyway. Exposure to negative biases might even give us the wisdom to act differently.

The question, then, is how to code a machine that can vacuum up humanity in all of its beauty and ugliness, and still act in a way that's not prejudiced. As for what that would look like, the researchers only write that "it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists."

As for how to solve the problem of prejudice in humans, well, that one seems a bit harder to simply engineer away.