The Inherent Bias of Facial Recognition
Image: Surian Soosay


The fact that algorithms can contain latent biases is becoming clearer and clearer. And some people saw this coming.

There are lots of conversations about the lack of diversity in science and tech these days. But along with them, people constantly ask, "So what? Why does it matter?" There are lots of ways to answer that question, but perhaps the easiest way is this: because a homogeneous team produces homogeneous products for a very heterogeneous world.

This column will explore the products, research programs, and conclusions that are made not because any designer or scientist or engineer sets out to discriminate, but because the "normal" user always looks exactly the same. The result is products and research that are biased by design.


Facial recognition systems are all over the place: Facebook, airports, shopping malls. And they're poised to become nearly ubiquitous as everything from a security measure to a way to recognize frequent shoppers. For some people that will make certain interactions even more seamless. But because many facial recognition systems struggle with non-white faces, for others, facial recognition is a simple reminder: once again, this tech is not made for you.

There are plenty of anecdotes to start with here: We could talk about the time Google's image tagging algorithm labeled a pair of black friends "gorillas," or when Flickr's system made the same mistake and tagged a black man with "animal" and "ape." Or when Nikon's cameras designed to detect whether someone blinked continually told at least one Asian user that her eyes were closed. Or when HP's webcams easily tracked a white face, but couldn't see a black one.

There are always technical explanations for these things. Computers are programmed to measure certain variables, and to trigger when enough of them are met. Algorithms are trained using a set of faces. If the computer has never seen anybody with thin eyes or darker skin, it doesn't know to see them. It hasn't been told how. More specifically: the people designing it haven't told it how.
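To make that concrete, here is a toy sketch, not any vendor's real pipeline: a "detector" trained only on one group's faces, then asked about a group it has never seen. The synthetic feature vectors stand in for real face embeddings, and scikit-learn's logistic regression stands in for a production model.

```python
# A toy sketch (assumed setup, not any vendor's real pipeline): a "face detector"
# trained only on one group's faces, then asked about a group it has never seen.
# Random feature vectors stand in for real face embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def faces(center, n=500):
    """Synthetic 'face' features clustered around a group-specific point."""
    return rng.normal(loc=center, scale=1.0, size=(n, 8)), np.ones(n, dtype=int)

def background(n=500):
    """Synthetic non-face examples."""
    return rng.normal(loc=0.0, scale=3.0, size=(n, 8)), np.zeros(n, dtype=int)

# The training set contains only group A faces, plus background clutter.
X_a, y_a = faces(center=2.0)
X_bg, y_bg = background()
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_bg]), np.concatenate([y_a, y_bg])
)

# Group B sits elsewhere in feature space and was never in the training data.
print("detection rate, group A:", model.score(*faces(center=2.0)))   # close to 1.0
print("detection rate, group B:", model.score(*faces(center=-2.0)))  # close to 0.0
```

The model does nearly perfectly on the group it was trained on and collapses on the group it never saw, which is the failure the anecdotes above describe.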

The fact that algorithms can contain latent biases is becoming clearer and clearer. And some people saw this coming.


"No one really spends a lot of time thinking about privilege and status, if you are the defaults you just assume you just are."

"It's one of those things where if you understand the ways that systemic bias works and you understand the way that machine learning works and you ask yourself the question: 'could these be making biased decisions?', the answer is obviously yes," said Sorelle Friedler, a professor of computer science at Haverford College. But when I asked her how many people actually do understand both systemic bias and the way algorithms are built, she said that the number was "unfortunately small."

When you ask people who make facial recognition systems if they worry about these problems, they generally say no. Moshe Greenshpan, the founder and CEO of Face-Six, a company that develops facial recognition systems for churches and stores, told me that it's unreasonable to expect these systems to be 100 percent accurate, and that he doesn't worry about what he called "little issues," like a system not being able to parse trans people.

"I don't think my engineers or other companies engineers have any hidden agenda to give more attention to one ethnicity," said Greenshpan. "It's just a matter of practical use cases."

And he's right, mostly. By and large, no one at these companies is intentionally programming their systems to ignore black people or tease Asians. And folks who work on algorithmic bias, like Suresh Venkatasubramanian, a professor of computer science at the University of Utah, say that's generally what they're seeing too. "I don't think there's a conscious desire to ignore these issues," he said. "I think it's just that they don't think about it at all. No one really spends a lot of time thinking about privilege and status, if you are the defaults you just assume you just are."


When companies think about error, they see it statistically. A system that works 95 percent of the time is probably good enough, they say. But that misses a simple question about distribution: Is that 5 percent of errors spread randomly, or concentrated in an entire group? If the system errs at random, failing evenly across all people because of bad lighting or a weird glare, 5 percent might be a fine error rate. But if it fails in clumps, that's a different story.

So an algorithm might be accurate 95 percent of the time and still totally miss all Asian people in the United States. Or it might be 99 percent accurate and wrongly classify every single trans person in America. This becomes especially problematic when, for example, US Customs and Border Protection switches over to biometrics.
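The arithmetic behind that point is easy to check directly. The numbers below are hypothetical, chosen only to show how a strong overall accuracy can hide a total failure for one group.

```python
# Hypothetical numbers only: an overall accuracy of 95 percent can coexist
# with a 100 percent failure rate for a group that is 5 percent of users.
import numpy as np

group = np.array(["majority"] * 95_000 + ["minority"] * 5_000)

# Worst-case clumping: every majority-group face is handled correctly,
# every minority-group face is not.
correct = group == "majority"

print("overall accuracy:", correct.mean())                    # 0.95
for g in ("majority", "minority"):
    print(f"accuracy for {g}:", correct[group == g].mean())   # 1.0, then 0.0
```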


And at the border we've already seen how biometric failures can be extremely painful. Trans people traveling through TSA checkpoints have all sorts of humiliating stories of what happens when their scans don't "match" their stated identity. Shadi Petosky live-tweeted her detention at the Orlando International Airport in Florida, where she said that "TSA agent Bramlet told me to get back in the machine as a man or it was going to be a problem." Since then, several more stories of "traveling while trans" have emerged revealing what happens when a biometric scan doesn't line up with what the TSA agent is expecting. Last year the TSA said they would stop using the word "anomaly" to describe the genitalia of trans passengers.


Facial recognition failures, whether tagging you and your friend as a gorilla or an ape or simply not seeing you at all because of your skin color, fall squarely into the suite of constant reminders that people of color face every day. Reminders that say: this technology, this system, was not built for you. It was not built with you in mind. Even if nobody did that on purpose, constantly being told you're not the intended user gets demoralizing.

Facial recognition tech is the new frontier of security, as demoed by this Mastercard app at the 2016 Mobile World Congress. But that just means another barrier for the many faces forgotten about in the design stage. Image: Bloomberg/Getty

These examples, according to the people I spoke with, are just the tip of the iceberg. Right now, failures in facial recognition aren't catastrophic for most people. But as the technology becomes more ubiquitous, its particular prejudices will become more important. "I would guess that this is very much a 'yet' problem," said Jonathan Frankle, a staff technologist at Georgetown Law. "These are starting to percolate into places where they really do have trouble." What happens when banks start using these systems as security measures, or when buildings and airports start using them to decide who gets in and out?

So how do companies fix this? For one thing: they have to admit there's a problem. And in a field where CEOs aren't known for their deft handling of race or diversity, that might take a while. One clear thing they can do is hire better: adding a more diverse set of people to your team can only help you predict and curb the bias that might be endemic to your algorithm.

But there are technological solutions too. The most obvious is to feed the algorithm a more diverse set of faces. That's not easy. For researchers at universities, there are a few face databases available, composed mostly of undergraduate students who volunteered to have their pictures taken. But companies unaffiliated with an institution, like Face-Six, have to either assemble their own databases by scraping the web or buy face databases to use in their systems.
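One way to act on that, sketched below under the assumption that the training images already carry group labels (itself a hard and sensitive problem), is to check the makeup of the training set and oversample whichever groups are underrepresented. The file names and group names here are made up for illustration.

```python
# A minimal sketch of rebalancing a skewed training set by oversampling
# underrepresented groups. Group labels and file names are hypothetical.
from collections import Counter
import random

def rebalance(samples, seed=0):
    """Oversample each group (with replacement) up to the size of the largest group."""
    random.seed(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(s["group"], []).append(s)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# A heavily skewed, made-up training set: 900 images of group A, 100 of group B.
training_set = (
    [{"group": "A", "image": f"a_{i}.jpg"} for i in range(900)]
    + [{"group": "B", "image": f"b_{i}.jpg"} for i in range(100)]
)

print("before:", Counter(s["group"] for s in training_set))
print("after: ", Counter(s["group"] for s in rebalance(training_set)))
```

Oversampling is only one of several remedies; collecting genuinely new, more varied images is usually better than recycling the few you have.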

Another thing companies can do is submit their algorithms for outside testing. Both Venkatasubramanian and Friedler work on designing tests for algorithms to uncover hidden bias. The National Institute of Standards and Technology has a program that tests facial recognition systems for accuracy and consistency. But companies have to opt into these tests; they have to want to uncover possible bias. And right now there's no incentive for them to do that.
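What such a test might report, in very rough strokes, is the system's error rates broken out by group rather than one overall figure. The sketch below is illustrative only and is not NIST's actual protocol; the verification trials and group labels are invented.

```python
# An illustrative disaggregated audit: false match rate (FMR) and false
# non-match rate (FNMR) per group, instead of a single accuracy number.
# Not NIST's actual protocol; the trials below are invented.
import numpy as np

def audit(is_same_person, system_said_match, group):
    """Per-group FMR and FNMR for a batch of face-verification trials."""
    is_same_person, system_said_match, group = map(
        np.asarray, (is_same_person, system_said_match, group)
    )
    report = {}
    for g in np.unique(group):
        genuine = (group == g) & is_same_person
        impostor = (group == g) & ~is_same_person
        report[str(g)] = {
            "FNMR": float((~system_said_match[genuine]).mean()) if genuine.any() else None,
            "FMR": float(system_said_match[impostor].mean()) if impostor.any() else None,
        }
    return report

# Invented trials: the system rejects genuine matches far more often for group B.
is_same_person    = np.array([True, True, False, False, True, True, True, False])
system_said_match = np.array([True, True, False, True,  False, False, True, False])
group             = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(audit(is_same_person, system_said_match, group))
# roughly: {'A': {'FNMR': 0.0, 'FMR': 0.5}, 'B': {'FNMR': 0.67, 'FMR': 0.0}}
```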

The problem of bias in facial recognition systems is perhaps one of the easiest to understand and to solve: a computer only knows the kinds of faces it has been shown. Showing the computer more diverse faces will make it more accurate. There are, of course, artists and privacy activists who wonder whether we want these systems to be more accurate in the first place. But if they're going to exist, they must at least be fair.