
Google's New Chatbot Taught Itself to Be Creepy

Human: "What is the purpose of living?" Machine: "To live forever."

What happens when you take an artificial intelligence and strip away all the "rules" that are standard in the field? You get conversations that look like this:

It's kind of creepy, but it's perhaps a more poignant, more real type of conversation. Oriol Vinyals and Quoc Le, researchers at the Google Brain project, recently experimented with a new type of chatbot that learns how to talk by looking at previous sentences and predicting what's going to come next. The team took several large databases of real human interactions and dumped them into the AI's memory. It was then able to learn, essentially, how humans talk.

In doing so, Vinyals and Le were able to cast aside the "rules" that are used in most conversation bots today.

Rule-based bots are essentially given a massive encyclopedia of questions and responses, and they make a reasonable attempt to answer questions and talk based on what a programmer has told them to do. At Google, however, Vinyals and Le have created a bot that they say has moderate "natural language understanding," enough to essentially roll with the punches of a given conversation.
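To make the contrast concrete, here's a minimal sketch of that rule-based approach, written in Python. The keywords and canned replies are invented for illustration; real systems are vastly larger, but the principle is the same, and so is the failure mode: if no rule matches, the bot is stuck.

```python
# A toy rule-based bot: a hand-written lookup from keywords to canned replies.
# All patterns and responses here are invented for illustration.

RULES = {
    "hello": "Hi there! How can I help you?",
    "password": "You can reset your password from the account settings page.",
    "bye": "Goodbye, take care!",
}

def rule_based_reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return "Sorry, I don't understand."  # no rule matched: the bot is stuck

print(rule_based_reply("I forgot my password"))
```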

"We experiment with the conversation modeling task by casting it to a task of predicting the next sequence given the previous sequence or sequences using recurrent [neural] networks," they wrote in a paper describing the experiment posted on the arXiv pre-print server. "We find that this approach can do surprisingly well on generating fluent and accurate replies to conversation."

The researchers' bots analyzed thousands of movie subtitles and a dataset taken from an online technical-support live chat to essentially "learn" how humans speak. The bot was then given real-life conversations to try. I don't spend a ton of time reading through chatbot transcripts, but some of the conversations are more natural-seeming than anything I've seen before.
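Conceptually, the training data is just consecutive lines of dialogue paired up: each utterance is the input, and the line that follows is the target the model learns to predict. A toy sketch, with invented lines standing in for the real subtitle and help-desk datasets:

```python
# Turn a stream of dialogue into (prompt, reply) training pairs.
# These sample lines are invented; the real datasets were movie subtitles
# and technical-support chat logs.

subtitle_lines = [
    "what is your name ?",
    "my name is john .",
    "are you a doctor ?",
    "no , i am a lawyer .",
]

# Pair each utterance with the utterance that follows it.
training_pairs = list(zip(subtitle_lines, subtitle_lines[1:]))

for prompt, reply in training_pairs:
    print(f"{prompt!r} -> {reply!r}")
```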

The team first had the robot attempt to serve as a technical support bot. Here are the results:

Not bad: it even throws in some smiley faces and a "take care." In further tests, the artificial intelligence remembers what it said earlier in the conversation and can answer dozens of questions without really outing itself as a machine.

Later, however, things get really interesting. Vinyals and Le had the bot talk about morality and the nature of life, and you get conversations that look like this:

There are non sequiturs in there, sure. This bot, as constructed, isn't going to fool a human into thinking it's not a machine (a feat measured by the Turing test, named after pioneering computer scientist Alan Turing). But something about this conversation seems deeper, maybe a bit creepier, than what normally turns up in these sorts of bot-human interactions.

The bot was even able to have opinions, and it didn't make the subject-pronoun errors that are common in artificial intelligence programs:

If these conversations seem more natural to you, that's because they are. According to the team, "the model can generalize to new questions."

"In other words, it does not simply look up an answer by matching the question with the existing database," they wrote. "In fact, most of the questions above do not appear in the training set."

What does this all mean? It means that if you let an artificial intelligence learn how humans talk to one another, it can do a pretty good job of following along. The Google engineers have created a bot that's more flexible than many that have come before it: the same model can be trained to do machine translation, answer questions, or hold normal conversations without having to be reprogrammed.

"It is surprising to us that a purely data driven approach without any rules can produce rather proper answers to many types of questions," they wrote.

That said, it's early days for this type of artificial intelligence, and there's still a long way to go before you could put it in, say, a robot, and have a human-like companion.

"The model may require substantial modifications to be able to deliver realistic conversations," the researchers wrote. "Amongst the many limitations, the lack of a coherent personality makes it difficult for our system to pass the Turing test."

The singularity is not here, and it may still be a ways off. But it's hard not to get a bit of a chill when you consider, as the bot says, "what happens when [they] get to the planet Earth."