Forget Turing, the Lovelace Test Has a Better Shot at Spotting AI

To pass the Lovelace Test, a machine has to create something original that it wasn't programmed to produce.

When a chatbot called Eugene Goostman passed Alan Turing’s famous measure of machine intelligence in June 2014 by posing as a Ukrainian teenager with questionable language skills, the world went nuts for about an hour before realizing that the bot, far from having achieved human-level intelligence, was actually pretty dumb.

Clearly, something is amiss here. If the Turing Test can be fooled by common trickery, it’s time to consider a new standard. Enter the Lovelace Test.

“This is unfortunate. I’m a huge fan of Turing, but his test is indeed inadequate,” Selmer Bringsjord, one of the designers of the Lovelace Test, a more rigorous AI detector, told me in an interview.

The Turing Test pits a human interlocutor against a computer program. The machine has to trick the human into thinking it’s a person, which essentially makes the test a human-to-human match of wits: the programmer only has to build a program that can fool an opponent into thinking it’s intelligent. In Goostman’s case, giving the bot a young age and a foreign nationality played into the manipulation.

What’s more, to be effective, chatbots designed to pass the Turing Test only have to mimic basic language skills rather than demonstrate genuine machine intelligence—ordering words and phrases in a convincing way, without knowing what they mean.

The Lovelace Test is meant to be more rigorous, testing for true machine cognition. It was devised in the early 2000s by Bringsjord and a team of computer scientists that included David Ferrucci, who later went on to develop the Jeopardy-winning Watson computer for IBM. They named it after Ada Lovelace, often described as the world's first computer programmer.

The Lovelace Test removes the potential for manipulation on the part of the program or its designers, testing instead for genuine autonomous intelligence—human-like creativity and origination—rather than mere syntactic manipulation.

An artificial agent, designed by a human, passes the test only if it originates a “program” that it was not engineered to produce. The outputting of the new program—it could be an idea, a novel, a piece of music, anything—can’t be a hardware fluke, and it must be the result of processes the artificial agent can reproduce. Now here’s the kicker: The agent’s designers must not be able to explain how their original code led to this new program.
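
Those conditions can be restated compactly. The sketch below is a paraphrase of the description above, with symbols added purely for readability; it is not the test's official formulation:

% Compilable LaTeX sketch restating the Lovelace Test's three conditions
% as described in this article. Symbols are assumed here for illustration:
% $A$ = the artificial agent, $H$ = its human designer(s), $o$ = the output.
\documentclass{article}
\begin{document}
Artificial agent $A$, designed by $H$, passes the Lovelace Test iff:
\begin{enumerate}
  \item $A$ outputs some $o$ that $A$ was not engineered to produce;
  \item $A$'s production of $o$ is not a hardware fluke, but the result of
        processes $A$ can repeat; and
  \item $H$ cannot explain how $A$'s original code led to $o$.
\end{enumerate}
\end{document}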

In short, to pass the Lovelace Test a computer has to create something original, all by itself.

In 1843, Lovelace wrote that computers can never be as intelligent as humans because, simply, they can only do what we program them to do. Until a machine can originate an idea that it wasn’t designed to, Lovelace argued, it can’t be considered intelligent in the same way humans are.

“We all know that the human engineers know exactly what to expect from their system,” Bringsjord said. “There might be a small bit of surprise, but basically the engineers know exactly what to expect and are completely unsurprised, because it’s all mechanical.”

At this point, it’s hard to imagine how a computer could ever pass the Lovelace Test. So far, one of the most lauded achievements in machine learning is Google’s artificial neural network that taught itself to recognize cats. It’s an impressive feat, but it’s light-years away from the kind of creative intelligence required to match human intellect.

But from Bringsjord’s perspective, the fact that the Lovelace Test may never be passed is exactly the point. It’s meant to put AI development in perspective.

“If you do really think that free will of the most self-determining, truly autonomous sort is part and parcel of intelligence, it is extremely hard to see how machines are ever going to manage that,” Bringsjord told me.

Even the most advanced self-learning neural network can only perform tasks that have first been mathematized and turned into code. So far, quintessentially human capacities like creativity, empathy, and shared understanding—what is known as social cognition—have proved resistant to mathematical formalization.

“Even for people who believe in the Singularity, the first notch in machine evolution is bringing machines to our level. Which means we have to figure out how to render some remarkable things that don’t seem to be formal, formal,” Bringsjord explained.

“We can’t seem to be able to mathematize creativity, and sensitivity to the cultural subjectivity of a newspaper article or a novel or short story—it seems very hard to do that,” he said.

Although he is adamant in his belief that artificial intelligence will never match that of humans, Bringsjord is optimistic when it comes to machine learning that aims a little lower than complete consciousness.

“I have no small amount of optimism about what machines that fall short of human intelligence, but leverage brilliant human engineering, can accomplish. In this regard, I think there are very few limits within those constraints,” he said. In other words, perhaps we should focus on the more practical applications of limited artificial intelligence, like self-driving cars, instead of the fantastical pursuit of a machine that can think, feel, and create just like us.