Silicon Valley Is Inserting Its Biases Into Nearly Every Technology We Use

A new book argues that algorithmic bias is a result of the insular, mostly white tech industry.

In 2015, a Google Photo algorithm auto-tagged two black friends as "gorillas," a result of the program having been under-trained to recognize dark-skinned faces. That same year, a British pediatrician was denied access to the women's locker room at her gym because the software it used to manage its membership system automatically coded her title—"doctor"—as male. Around the same time, a young father weighing his two-and-a-half-year-old toddler on a smart scale was told by the accompanying app not to be discouraged by the weight gain—he could still shed those pounds!


These examples are just a glimpse of the biases embedded in our technology, catalogued in Sara Wachter-Boettcher's new book, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. Wachter-Boettcher also chronicles more alarming instances of biased tech, like crime prediction software that mistakenly codes black defendants as having a higher risk of committing another offense than white defendants, and design flaws in social media platforms that leave women and people of color wide open to online harassment.

Nearly all of these examples, she writes, are the result of an insular, mostly white tech industry that has built its own biases into the foundations of the technology we use and depend on. "For a really long time, tech companies thrived off of an audience that wasn't asking the tough questions," Wachter-Boettcher told me during a recent phone interview. "I want people to feel less flummoxed by technology and more prepared to talk about it."

Wachter-Boettcher, a web consultant focused on helping companies improve their user experience and content strategy, said she wrote the book for non-industry readers (the kind who "find Facebook kind of creepy but [can't] quite articulate what is going on"), who she believes have a right to push back against the industry's flaws.

The following conversation between Wachter-Boettcher and me has been edited for length and clarity.


In the final chapter of the book, you wonder if talking about apps and algorithms during this "weird moment in America" even matters. You write, "Who cares about tech in the face of deportations, propaganda, and the threat of a second cold war?" Why should we care?
I had to write that six months ago, and it turns out it's still the same in a lot of ways.

I would say technology is a massive industry that's extremely powerful. It's something that touches literally every aspect of our lives, it's shaping our culture, and it is affecting how we feel. If we ignore technology, or if we treat technology as if it's neutral, we are doing ourselves a disservice.

I look at a lot of these biases as microaggressions, the paper cuts of technology. At an individual level it's maybe not a huge deal that this one online form doesn't accept people who don't identify as male or female. But when you look at those over time, they add up. It's death by a thousand paper cuts, and I think that is important, because it is important to think about humans.

But the other reason the small stuff is important is that the little biases and little failures are red flags for greater abuses. They are visible in ways that a biased algorithm is not visible. If a tech company has paid so little attention to the values of people that it creates an interface that only works if you're straight and cis, or a photo algorithm that only recognizes you if you're white, do you want to trust that algorithm with other aspects of your life? If they cannot make an interface that includes you, do you trust them to make an algorithm that you cannot even see?


Did you get pushback from the tech industry while you were reporting the book?
Some of the pushback has been things like, 'Oh you know, first world problems,' or 'Oh gosh, you have too much time on your hands.' It was general dismissiveness. I haven't gotten much pushback of the variety that actually engages with the details yet. The book isn't out.

I think that there are a lot of people who have good intentions but haven't necessarily wanted to do the difficult work of realizing how much of their own perspective is informed and driven by a white supremacist, sexist culture. So much of our worldview is tied up in a history that is messy at best, and that's true for people who make technology as well. It's a lot of people who are well-meaning but think, 'Oh, well, I'm not racist,' as opposed to thinking of the underlying systems and structures that are in place. You're not just not doing the work of undoing bias, you're embedding it into things that are going to outlive you.

One of the really scary things is [the neural network] Word2vec. It was created by Google researchers a couple of years ago to produce word embeddings to inform natural language processing. They fed it a huge amount of text from Google News articles to give it content to churn through and learn from. It learned things really well. It could answer analogies; it understood relationships between words. But it would also do things like this: if you asked, 'man is to woman as computer scientist is to…,' it would answer 'homemaker.' That's just a little bit of technology that can be used to process language. But what happens is people do research and build a tool like that with the bias embedded in it.
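[For readers who want to see what such an analogy query looks like in practice, here is a minimal sketch using the open-source gensim library with a pretrained word2vec model. The file name, the vocabulary tokens, and the exact output are illustrative assumptions, not details from the book; researchers such as Bolukbasi et al. reported "homemaker" for a similar man/woman analogy over the Google News vectors in 2016.]

    # Sketch: probing a pretrained word2vec model for analogy-style associations.
    # Assumes gensim is installed and a copy of the Google News vectors has been
    # downloaded separately; the path and tokens below are illustrative.
    from gensim.models import KeyedVectors

    # Load the pretrained embeddings (a multi-gigabyte binary file).
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True
    )

    # Analogy by vector arithmetic: computer_programmer - man + woman,
    # then list the nearest words to the resulting point in embedding space.
    for word, score in vectors.most_similar(
        positive=["computer_programmer", "woman"], negative=["man"], topn=3
    ):
        print(word, round(score, 3))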


So apps and algorithms create social feedback loops that can then influence user behavior?
That's kind of the deal with an algorithm. An algorithm has to be tuned to something. It's doing a series of steps, and someone decided what those steps were. If you look at the work that's been done on algorithmic software for predictive policing, one of the things it does is say, 'this is a high crime area,' and it sends more police there, and then more police see more crimes, and it's labeled as an even higher crime area. If you have a high population of black people, they'll be over-policed. Although rates of crimes being committed are similar in black and white neighborhoods, arrests are higher in black neighborhoods, so more police are sent there, and it perpetuates.

One of the things that people who make software want to do is pretend they can predict the future, and it's not really about predicting the future, it's about re-inscribing the past. That's what happens if you don't specifically design against it.
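[The mechanics of that loop are easy to see in a deliberately oversimplified toy simulation; this is not a model of any real predictive-policing product, and every number in it is invented. Two neighborhoods have identical underlying offense rates, but patrols go wherever past records are highest, and only patrolled offenses get recorded, so a tiny seed gap snowballs.]

    # Toy illustration of the feedback loop described above; all values invented.
    TRUE_INCIDENTS_PER_DAY = 5      # identical underlying rate in both areas
    recorded = {"A": 6, "B": 5}     # seed records: A starts one incident ahead

    for day in range(30):
        # "Predict" the hot spot from past records and send the patrol there.
        hotspot = max(recorded, key=recorded.get)
        # Offenses happen at the same rate everywhere, but only the patrolled
        # neighborhood adds to the official record.
        recorded[hotspot] += TRUE_INCIDENTS_PER_DAY

    print(recorded)   # {'A': 156, 'B': 5}: a one-incident seed gap, amplified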

What are the most common areas in tech where biases are really pervasive?
If you were going to start paying more attention to the technology you interact with every day, I would say start paying attention to areas where an interface is asking for information from you, anytime it's making assumptions or guesses about who you are or what you want, or anytime the design or content of that tech product is interacting with your own content. Anything that involves altering photos, or anything that involves them putting their copy around something you created. You see this a lot when tech companies are trying to increase engagement—they'll try to do clever and funny and cute things like surface a post from your past on Facebook.


The other day, Facebook reminded me of my own birthday, as if I would forget. But in the book, you provide a much more jarring example of a father who was shown a Year in Review album, complete with balloons and streamers, featuring a photograph of his young daughter who had died that year.
When a tech company like Facebook assumes everyone who posted something to its platform had a good year, it's essentially assuming that it knows you better than you do. Most tech companies haven't hired many people who actually have training in the social sciences. Soft skills tend to be denigrated, and in that kind of culture—where skills like empathy and communication are not valued—it's easy for people to assume they understand the impact of their work and wildly miss huge assumptions that they've made.

I've known a lot of people who have worked at Facebook who have had a lot of influence on one narrow feature, but those conversations aren't happening upstream where the question is: Should we be doing this in the first place?

Did anything really surprise you in your research?
I didn't find biases I hadn't thought about before, but that might be because of my own biases. I was surprised about just how skewed some tech products could be. I was a little surprised that FaceApp [a photo editing app] had, this year, framed its entire algorithm—its "hotness" setting—around white people. So it essentially had learned what beauty was from white people. If you were using that setting and you were a person of color, it would lighten your skin or take a lot of people's noses and make them narrower. It's like it just didn't occur to them. They admitted that it was a problem in their training data set. I was surprised that they could get all the way to market and have it never occur to them that their algorithm would totally screw up for people who weren't white. I'm not surprised that tech companies have loads of bias towards whiteness. I was surprised at how obvious it was—that it was so blatant.

What can tech users do about these biased algorithms?
I think at an individual level, it's hard to feel like you can do something. The answer is always: delete the app. That's a fine choice to make, but I realize there are many apps you don't want to delete. There are many I don't want to delete. The first thing is to look more critically at the technology you use. When technology makes you feel alienated or uncomfortable, for a lot of people an instinct is to feel like they don't "get" it. Whenever you have those feelings, stop and say, 'Wait a second, maybe this isn't about me. Maybe it's about this product, and this product is wrong.' I do think we internalize a lot of this stuff instead of assessing the product we're engaging with. It's not you, it's the technology.

When you find those small visible examples of bias, I would call tech companies out about them. Contact them, tell them on social media. I think they've gotten a free pass for a long time. I do think they need to hear this kind of pushback from people. Particularly when we're getting into anything related to AI and algorithms, it's going to have to come down to regulation. But that's not going to happen if there's not a loud and large number of people who are being critical.

Wachter-Boettcher's book is available in stores starting today.