
Microsoft's Tay Experiment Was a Total Success

Microsoft wanted its Tay chatbot to be reflective of users. That succeeded!
Image: Twitter

Heretofore, we humans have been inherently narcissistic in our embrace of progress. From our nervous fascination with sexbots to the anthropocentric blinders that guide our search for aliens, our innovations are rarely more than a mirror shining back on our own needs. And why wouldn't they be? The goal is to make the world a better, more interesting place, but those are naturally subjective targets.


The big question, then, is this: if we keep pushing technology and discovery further and further, who is it for? This is what Rose Eveleth asks in her new column, Design Bias: tech is reflective of the people who build and design it, which means that if the builders lack a diverse perspective, so will the end product. Using the example of facial recognition algorithms, which have long struggled to recognize the faces of already-marginalized groups, she writes, "facial recognition is a simple reminder: once again, this tech is not made for you."

These biases aren't necessarily born of malice, but tech that ends up exclusionary isn't any more excusable just because its developers never realized they had blind spots in the first place. Our future is not distributed evenly, and access and inclusion are among the most pressing issues facing the industries and areas of development that matter most today.

But this week we've gotten a glimpse of an entirely different aspect of this unevenly distributed future: What happens when, instead of tech being designed in the image of its designers, it ends up reflecting each individual user? A truly distributed technology would learn its mores from all of us. Will we like what we see?

Image: Twitter

A whole lot of blogs have been written about Tay, the Microsoft chatbot meant to mimic teens in order to "experiment with and conduct research on conversational understanding," per Tay's official site. Through "casual and playful conversation" on Twitter, Tay was designed with an AI-like ability to learn how humans speak so it could better connect with us and deliver an "experience [that] can be more personalized for you."
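Microsoft hasn't published how Tay actually models language, but the basic premise of a bot that learns purely from whatever its users say can be caricatured in a few lines. The sketch below is hypothetical, a toy word-level Markov chain rather than Tay's real architecture: it absorbs incoming messages with no judgment about their content, so everything it says is a remix of what it has been told.

```python
import random
from collections import defaultdict

# Hypothetical toy, not Tay's actual architecture: a bot that "learns from
# casual conversation," reduced to a word-level Markov chain. Everything it
# can say is stitched together from things users have already said to it.
class MirrorBot:
    def __init__(self):
        self.chain = defaultdict(list)  # word -> words users said next

    def learn(self, message):
        """Absorb a user message, with no judgment about its content."""
        words = message.split()
        for current, following in zip(words, words[1:]):
            self.chain[current].append(following)

    def reply(self, max_words=12):
        """Compose a reply by resampling what users have already said."""
        if not self.chain:
            return "..."
        word = random.choice(list(self.chain))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = MirrorBot()
bot.learn("hello fam this bot has zero chill")    # playful in, playful out
bot.learn("some genuinely vile troll bait here")  # vile in, vile out
print(bot.reply())
```

The toy makes the asymmetry obvious: the bot has no opinions of its own, only an ever-growing corpus of ours.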


Who is "you," here? At first, the "you" of Tay was a corporate dad stuck in the uncanny valley of teen talk—I'll never stop laughing at Tay's nonsensical original bio, "Microsoft's A.I. fam from the internet that's got zero chill!", which read like a 90s commercial for scrunchies—which is about what you'd expect from a Microsoft research project aimed at teens. No harm, no foul, aside from some awkwardness. But then, thousands and thousands of tweets later, where did Tay end up? As a virulent racist, of course.

The most important democratizing technology on the horizon—truly intelligent AI—is still highly susceptible to the blind spots of its creators

Tay was designed specifically not to reflect its own designers, but whoever interacts with it. In that, it was certainly a success! It's a notable shift in design considerations: the "you" in this case is the people who actually use Tay, which is theoretically not exclusionary at all. Tay has no blind spots, because Tay is constantly learning from its users. Contrast this with Siri, which didn't know how to respond to questions about rape or domestic abuse, presumably because it never occurred to the programmers that someone might want to make that search.

If tech is designed purely to learn and evolve in response to its users, not its designers, then design bias would seemingly be a moot point. Presumably, that's why Microsoft built Tay in the first place. If the company recognizes it's not so great at talking to teens, why even waste time trying to build products that speak to what its older engineers think teens are into? Why not build tech that goes out there and finds out and responds on its own? In this ideal scenario, engineers can avoid excluding people by having an AI do market research, communications, and strategy development in real time.



Hilariously enough, there's a bias implicit in what seems like such a democratic ideal: the programmers made a rather rosy assumption about what Tay would be exposed to out there on the wild, wild web. They forgot that building a technology to purely mirror a user base means it will reflect ALL of that user base. And guess what: this is the internet. There are a lot of generally shitty people out there, and they tend to delight in being louder than everyone else.
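To make the "louder" point concrete, here's another hypothetical sketch, with invented numbers: if a bot weights what it says by how often it hears things, a small group that posts a hundred times as much as everyone else sets the tone for everyone.

```python
import random
from collections import Counter

# Hypothetical illustration: replies sampled in proportion to what the bot
# hears. A small but loud group easily drowns out ordinary users.
heard = Counter()
heard.update(["ordinary friendly chatter"] * 10)    # typical users
heard.update(["coordinated troll garbage"] * 1000)  # a loud minority

phrases, weights = zip(*heard.items())
print(random.choices(phrases, weights=weights, k=5))
# Output skews overwhelmingly toward "coordinated troll garbage"
```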

It's notable that more than a few posts recapping Tay's racist tirades spent precious sentences explaining that it wasn't Microsoft or Tay's fault that the chatbot had learned a bunch of horrible racist tropes and 9/11 memes from its users. It's an algorithm, the argument goes, and an experimental one at that, so let's not burn down Redmond.

Image: Twitter

It's fair to argue that we shouldn't blame algorithms and nascent AI for doing dumb things. But this makes for a rather profound conundrum: Who is in control of Tay? Microsoft, much like Twitter itself, is guilty of being fundamentally unaware of how quickly the platform can turn into a garbage fire. So while Microsoft presumably did a good turn by recognizing its own blind spots and trying to AI its way out of them (which is also a tacit admission that it can't corral younger users on its own), the company still showed a fundamental misunderstanding of just how trollish Twitter users can be. That's especially glaring considering that just about every major corporate outreach program online ends up getting trolled. Remember the Coca-Cola Hitler moment?

This matters, not as an excuse to blame Microsoft or Coke for the actions of trolls, but as evidence that the most important democratizing technology on the horizon—truly intelligent AI—is still highly susceptible to the blind spots of its creators. The dream is that an AI that can get to know you—as in you, the person reading this now—will end up being free of prejudice or bias. It'll learn who you are, and exist to help you, and won't inadvertently ignore your needs or say something accidentally offensive.

The larger lesson here goes well beyond one chatbot: As all of our devices come online, talk to each other, and make decisions and take actions on their own based on their study of our habits, the Internet of Things is going to turn into its own proto-neural network, an AI by proxy that will gradually be filled with more and more actual AI. The dream is big: tech that works for you to do exactly what it knows you, the individual reader, need, rather than what a company thinks you, the generic target user group, need.

Yet as Tay shows, humans are not as polished as a marketing-based profile of our humanity would suggest. That means technology that's truly reflective of who we all are will include the obnoxious and the terrible, and the only way around that is the same solution to the bias problems that plague tech now: development and engineering teams diverse enough to identify their own blind spots in how they address users. Teams that aren't will inevitably misunderstand the reality of their users' experiences, at their own peril.