
Microsoft Had to Suspend Its AI Chatbot After It Veered Into White Supremacy

That was quick.

Less than a day after Microsoft launched its new artificial intelligence bot Tay, she has already learned the most important lesson of the internet: Never tweet.

Microsoft reportedly had to suspend Tay from tweeting after she posted a series of racist statements, including "Hitler was right I hate the jews." The company launched the bot, which was designed to communicate with "18 to 24 year olds in the U.S" and to "experiment with and conduct research on conversational understanding," on Wednesday. It appears some of her racist replies were simply regurgitating statements trolls had tweeted at her.


Tay also apparently went from "i love feminism now" to "i fucking hate feminists they should all die and burn in hell" within hours. Zoe Quinn, a target of the online harassment campaign Gamergate, shared a screengrab of the bot calling her a "Stupid Whore" and wrote, "this is the problem with content-neutral algorithms."

"Wow it only took them hours to ruin this bot for me. This is the problem with content-neutral algorithms." (linkedin park, March 24, 2016)

Business Insider also grabbed some screenshots in which the bot denied the Holocaust, used the N-word, called for genocide, and agreed with white supremacist slogans. Those tweets have all since been deleted, and Microsoft told Business Insider it has suspended the bot to make adjustments.

"Microsoft Blames Chatbot's Racist Outburst On" (BuzzFeed Tech, March 24, 2016)

At the time of writing, Tay had 96,000 tweets and more than 40,000 followers.


When asked for comment, Microsoft sent this statement: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

The company has said the bot is "designed for human engagement": the more she talks with humans, the more she learns. Apparently she's been talking to the wrong people.

Update: This story has been updated with comment from Microsoft.