


Microsoft Apologizes for Creating a Teenage, Racist, Homophobic Chatbot

"We are deeply sorry."

In a blog post published Friday, Microsoft executive Peter Lee apologized for the disturbing behavior of the company's chatbot, Tay. Originally crafted to "experiment with and conduct research on conversational understanding," Tay's chats quickly took a darker turn. In less than 24 hours, Tay went from a nerdy attempt at reaching teens to the racist, Holocaust-denying, Hitler-loving AI of all our nightmares.


"c u soon humans need sleep now so many conversations today thx"

TayTweets, March 24, 2016

Lee wrote that his company is "deeply sorry for the unintended offensive and hurtful tweets from Tay." He then tried to explain how Microsoft seemed to have completely missed the fact that Twitter is often a hub for abusive language, citing a similar Microsoft chatbot, China's XiaoIce, which never experienced the kind of abuse that plagued Tay.

Microsoft apparently "prepared for many types of abuses of the system," yet its engineers still "made a critical oversight of this specific attack." What's confusing is what other kinds of attacks Microsoft could be referring to. On Twitter, abuse is often the most pressing issue, so it seems strange that Microsoft didn't have tools in place to keep Tay from becoming just like the trolls who tormented her.

Towards the end of the post, Lee makes a good point: "To do AI right, one needs to iterate with many people and often in public forums." If we want to stand a chance at making an AI free from biases, we have to include everyone in the conversation.

That's the easy part. What's hard is figuring out how to make sure the bots we create don't merely mirror the most hateful viewpoints of humanity. Once it learns how to tackle this problem, Microsoft will bring Tay back, Lee said.