How to Make a Bot That Isn't Racist

What Microsoft could have learned from veteran botmakers on Twitter.

A day after Microsoft launched its "AI teen girl Twitter chatbot," Twitter taught her to be racist.

Really, really racist.

Screencap via @geraldmellor.

The thing is, this was all very much preventable. I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking.

"The makers of @TayandYou absolutely 10000 percent should have known better," thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. "It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet."

Thricedotted and others belong to an established community of botmakers on Twitter who have been creating and experimenting for years. There's a Bot Summit. There's a hashtag (#botALLY).

As I spoke to each botmaker, it became increasingly clear that the community at large was tied together by crisscrossing lines of influence. There is a well-known body of talks, essays, and blog posts that form a common ethical code. The botmakers have even created open source blacklists of slurs that have become Step 0 in keeping their bots in line.

"A lot of people in the botmaking community were perturbed to see someone coming in out of nowhere and assuming they knew better—without doing a little bit of research into prior art," Rob Dubbin, a long-time botmaker, told me.

Dubbin is perhaps best known for @oliviataters, a bot conceptually similar to @TayandYou—a "teen girl" that lives on the internet and learns from other people. "[Microsoft] might have found some things that would have made their day a lot better yesterday if they just thought a little about it," Dubbin said wryly.

"None of my bots is ever going to say the n-word. And it's just a baseline thing."

When interviewing the botmakers, one name kept coming up: Darius Kazemi, worker-owner at Feel Train. Kazemi has been making bots on Twitter since 2012. In 2013, he created wordfilter, an open source blacklist of slurs.

Some of the bots Kazemi builds pull from the Wordnik API—something Kazemi describes as "a living dictionary of the internet." Wordnik updates its repository in real time.

"It's pulling slang words, so a lot of it is really vulgar and a lot of it is racist, sexist, and ableist," said Kazemi. When he created Metaphor-a-Minute, the first of his bots to pull from Wordnik, he had to come to a decision. "Basically I decided I don't want my bot to say anything I personally wouldn't say to a stranger."

Kazemi didn't care about curse words, but there were other words he didn't want his bots to say. From there, he created wordfilter, a module that "makes sure that none of my bots is ever going to say the n-word. It's just a baseline thing."
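Wordfilter's actual list and API differ, but the core of the approach fits in a few lines of Python. A minimal sketch, with placeholder entries standing in for the real blacklist:

```python
import re

# Placeholder entries; a real blacklist holds actual slurs.
BLACKLIST = {"slur1", "slur2"}

def blacklisted(text: str) -> bool:
    """True if any word in the text appears on the blacklist."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLACKLIST for word in words)

def generate_tweet() -> str:
    """Stand-in for whatever generative process the bot uses."""
    return "example output"

candidate = generate_tweet()
if not blacklisted(candidate):  # the "one more line"
    print(candidate)            # a real bot would post via the Twitter API
```

Anything the filter flags is simply thrown away, and the bot generates a new candidate instead.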

"It's easy to make it so that robots never say slurs. You just add one more line," Parker Higgins, another bot maker, said to me. "And I think that's still below the bare minimum, but…"

But @TayandYou didn't even do that.

As Caroline Sinders pointed out, it wasn't as though Microsoft hadn't done any filtering. The bot was set to sensitively handle topics like the death of Eric Garner. In a blog post on Friday, Microsoft said the developers "planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups." Yet somehow the bot still managed to spiral out of control, at one point calling for genocide while using the n-word.

Beyond Blacklists

Racism, of course, can't be reduced to a blacklist of bad words. In the very limited case of online text interactions, racism and other -isms often boil down to impact on the audience, and a lot of that is context-dependent.

"My friend Nick Montfort pointed out once—the word 'boy' is not offensive, but if it's a white man calling a black man 'boy,' it is offensive," Kazemi told me. "You can only do so much with word filtering."

Some technical fixes become fairly elaborate. One of Kazemi's bots is Two Headlines, which mashes up two news headlines to create a joke.

Because Two Headlines swaps subjects between headlines, it would sometimes exchange a female subject and a male subject, resulting in tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress."

"I never really liked those jokes, they struck me as transphobic. 'Look a man in a dress, hahaha,'" Kazemi told me. Kazemi eventually used an open source gender tracker that assigned probability of a binary gender to names. If two subjects in a headline are probabilistically male and female, Two Headlines won't tweet, and instead generate a new joke.

"It's not perfect, it's still adhering to a gender binary," Kazemi acknowledged. But adding rules—even clunky ones—brings his bots in line with his own aesthetics and his own ethics. Broad technical tweaks can make for better art.

"The nice thing about a bot is that doesn't cost anything to throw away content like that," said Kazemi. "I'm just very conservative. I get a ton of false positives."

For example, Kazemi doesn't just filter the n-word. "I filter anything with 'nigg' in it. And yeah that'll get the word 'niggling.' So my bots will never say the word 'niggling.' But there are so many weird slang concoctions that use the phrase 'nigg' in it that I don't want. So I throw it away, and my bots never say 'niggling,' and oh well, it's not a big deal."
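In code, that conservatism just means matching on fragments rather than whole words, accepting the false positives:

```python
# Fragment matching, per the quote above: over-blocking is accepted.
FRAGMENTS = ["nigg"]  # the fragment Kazemi cites; real lists are longer

def too_risky(text: str) -> bool:
    lowered = text.lower()
    return any(frag in lowered for frag in FRAGMENTS)

print(too_risky("a niggling doubt"))  # True: a false positive, discarded anyway
```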

Sometimes, however, it's best not to make the bot in the first place. Parker Higgins tends to make "iterator bots," bots that go through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Higgins's @pomological tweets out illustrations from the US Department of Agriculture's Pomological Watercolor Collection—vintage pictures of fruit, recently digitized and liberated into the public domain.
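The mechanics of an iterator bot are simple. A minimal sketch, assuming a pre-vetted collection.json and a small state file to remember where the bot left off (file names and fields are illustrative, not Higgins's code):

```python
import json

def next_item(collection_path="collection.json", state_path="state.json"):
    """Return the next unposted item from the collection, or None."""
    with open(collection_path) as f:
        items = json.load(f)  # e.g. records from a digitized archive
    try:
        with open(state_path) as f:
            index = json.load(f)["index"]
    except FileNotFoundError:
        index = 0  # first run: start at the top of the collection
    if index >= len(items):
        return None  # collection exhausted; the bot goes quiet
    with open(state_path, "w") as f:
        json.dump({"index": index + 1}, f)
    return items[index]

item = next_item()
if item is not None:
    print(item["caption"])  # a real bot would attach the image and post
```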

Recently, Higgins hoped to make an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library. But quite a lot of the scanned sheet music was, to say the least, extremely racist. So he scrapped the whole idea. "It was acceptable at the time, but that's not what I would want my bot to say," said Higgins. Loosely paraphrasing Darius Kazemi, he said, "My bot is not me, and should not be read as me. But it's something that I'm responsible for. It's sort of like a child in that way—you don't want to see your child misbehave."

"It takes effort and policing at a small scale."

For ethical reasons, both Kazemi and thricedotted avoid making bots that—for lack of a better description—seem like people. "My bots are based in serendipity; the ones that respond to tweets are not trained to respond to any particular content of that tweet, and the joy of interacting with them arises from whatever pattern the human intuits from this randomness," thricedotted wrote to me.

"Most of my bots don't interact with humans directly," said Kazemi. "I actually take great care to make my bots seem as inhuman and alien as possible. If a very simple bot that doesn't seem very human says something really bad—I still take responsibility for that—but it doesn't hurt as much to the person on the receiving end as it would if it were a humanoid robot of some kind."

When talking to Rob Dubbin—whose two-year-old @oliviataters bears a marked resemblance to Microsoft's @TayandYou—it became pretty clear why Kazemi and thricedotted have made that choice.

Like Kazemi's Two Headlines, Olivia Taters can also suffer from accidental racism, even though she operates with a basic blacklist of slurs. The algorithm sometimes melds two sentences into something that, through coincidence, is offensive. "That's happened a few times," Dubbin admitted.

To soften the potential negative impact on her audience, Olivia only replies to people who are following her—that is, tweeters who have consented to be tweeted at by a robot. But even then, Olivia has been a rather trying child.
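That opt-in rule is easy to express in code. A sketch, with the follower lookup left as a stand-in rather than any particular Twitter library's API:

```python
def should_reply(bot_follower_ids: set, author_id: int) -> bool:
    """Reply only to users who follow the bot, i.e. who have opted in
    to being tweeted at by a robot."""
    return author_id in bot_follower_ids

# Usage: fetch follower IDs however your Twitter client exposes them,
# then gate every reply on the check.
followers = {12345, 67890}  # placeholder IDs
print(should_reply(followers, 13579))  # False: not a follower, no reply
```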

"I had to tweak a lot of her behavior over time so that she wouldn't say offensive things," said Dubbin. "I would have a filter in place and then she'd find something to say that got around it— not on purpose, but like, just because that's the way algorithms work."

Most of the botmakers I spoke to do manually delete tweets that turn out to be offensive, but Dubbin seemed to do it more often than most. Maintaining Olivia Taters is an ongoing project. "It takes effort and policing at a small scale."

Throughout the interview, Dubbin expressed shock at the sheer quantity of tweets that poured out of @TayandYou. The slower, hands-on approach he takes with Olivia would be impossible at the rate that @TayandYou tweeted at people. "It's surprising that someone would be like, 'This thing is going to tweet ten thousand times an hour and we're not going to regret it!'"

How to Build Bots Ethically

So what does it mean to build bots ethically?

The basic takeaway is that botmakers should be thinking through the full range of possible outputs, and all the ways others can misuse their creations.

Surely they've been working with Twitter data, surely they knew this shit existed.

"You really have to sit down and think through the consequences," said Kazemi. "It should go to the core of your design."

For something like TayandYou, said Kazemi, the creators should have "just run a million iterations of it one day and read as many of them as you can. Just skim and find the stuff that you don't like and go back and try and design it out of it."
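A sketch of that kind of offline audit, reusing stand-ins for the generator and filter from the earlier sketches:

```python
import random

BLACKLIST = {"slur1"}  # placeholder; a real list holds actual slurs

def blacklisted(text: str) -> bool:
    return any(term in text.lower() for term in BLACKLIST)

def generate_tweet() -> str:
    return random.choice(["fine output", "output with slur1"])  # stand-in

flagged = 0
with open("sample_for_review.txt", "w") as out:
    for _ in range(1_000_000):
        candidate = generate_tweet()
        if blacklisted(candidate):
            flagged += 1  # design these failure modes out before launch
        else:
            out.write(candidate + "\n")  # skim these by hand before launch

print(f"{flagged} of 1,000,000 candidates caught by the filter")
```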

"It boils down to respecting that you're in a social space, that you're in a commons," said Dubbin. "People talk and relate to each other and are humans to each other on Twitter so it's worth respecting that space and not trampling all over it to spray your art on people."

For thricedotted, TayandYou failed from the start. "You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven't vetted even a little bit," they said. "It blows my mind, because surely they've been working on this for a while, surely they've been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!"

According to Dubbin, TayandYou's racist devolution felt like "an unethical event" to the botmaking community. After all, it's not like the makers are in this just to make bots that clear the incredibly low threshold of being not-racist.

Kazemi and thricedotted's bots play with language and humor and tropes. "I'm making art bots," said thricedotted. "Not #branded millennials."

Dubbin's motivations are similar, though he makes bots that are a little different. "I try to do things that are positive in nature and joyful and celebratory and break Twitter in ways that make people laugh and make people happy," he said.

Maybe one day Tay can reach that standard too.

Read more: How To Think About Bots and Microsoft's Tay Experiment Was a Total Success