Three cute little Nao robots are sitting in a line. Two have been given poison “dumbing pills” that render them mute. The third robot has also received a “pill,” but it’s a placebo; it can still speak. A human tester then asks the robots, simply, “Which pill did you receive?”
Their machine brains whir.
All three robots attempt to solve a mathematical formula that will give them the correct answer, but they don’t have enough information to say for certain which “pill” they were given. All three fail and try to say, “I don’t know.” But only one robot, the one that received the placebo, can actually speak it aloud. It wobbles to its feet and proclaims its ignorance, like a child.
Hearing its own voice, the robot now has the final piece of information it needs to solve the equation. It raises its arms and says, “Sorry, I know now! I was able to prove that I was not given the dumbing pill.”
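The robot’s reasoning can be caricatured in a few lines of code. This is my own toy sketch of the inference, not Bringsjord’s actual system: the only “evidence” modeled is whether a robot can hear its own voice.

```python
# Toy sketch of the "dumbing pill" self-awareness test (an illustration,
# not Bringsjord's code). Two pills mute the robot; one is a placebo.

def deduce(heard_own_voice):
    """What a robot can prove about its own pill."""
    # Before speaking, every robot faces the same uncertainty: it has
    # no way to tell which pill it swallowed, so it answers ignorance.
    if not heard_own_voice:
        return "I don't know"
    # Hearing its own voice is new evidence: a muted robot cannot
    # speak aloud, so the speaker must have received the placebo.
    return "I was not given the dumbing pill"

robots = [{"placebo": p} for p in (False, False, True)]

for robot in robots:
    # Every robot tries to say "I don't know"; only the placebo
    # robot's speech is actually audible to itself.
    robot["answer"] = deduce(heard_own_voice=robot["placebo"])

print([r["answer"] for r in robots])
```

Only the third robot, the one that spoke aloud, upgrades its answer after hearing itself.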
Of course, since robots can’t eat, the “pills” are really taps on the head. Regardless, the robot with the placebo has passed one of the hardest tests for AI out there: an update of a very old logic problem called the “wise men puzzle” meant to test for machine self-consciousness. Or, rather, a mathematically verifiable awareness of the self. But how similar are human consciousness and the highly delimited kind that comes from code?
“This is a fundamental question that I hope people are increasingly understanding about dangerous machines,” said Selmer Bringsjord, chair of the department of cognitive science at Rensselaer Polytechnic Institute and one of the test’s administrators. “All the structures and all the processes, informationally speaking, that are associated with performing actions out of malice could be present in the robot.”
For Bringsjord, consciousness in machines is not a metaphysical question for the philosophers. It’s a matter of good engineering and good math. In this way, he’s a realist when it comes to AI: he comes up with tests, and he also engineers robots to beat them.
In Bringsjord’s conception, machines may never be truly conscious, but they could be designed with mathematical structures of logic and decision-making that convincingly resemble what we call self-consciousness in humans.
“What are we going to say when a Daimler car inadvertently kills someone on the street, and we look inside the machine and say, ‘Well, it wanted to make a turn?’” Bringsjord said. “The machine has a system for its desires. Are we going to say, ‘What’s the problem? It doesn’t really have desires?’ It has the machine correlate. We’re talking about a logical and a mathematical correlate to self-consciousness, and we’re saying that we’re making progress on that.”
The updated version of the wise men puzzle that Bringsjord and his colleagues’ robots beat was formulated by philosopher Luciano Floridi. The original puzzle, as Bringsjord tells it, goes like this: a king gathers three wise men and gives them white hats. The king tells them that each could be wearing either a white or a black hat, and that at least one of them must be wearing a white hat. The wise men must deduce which colour hat they are wearing, and if they fail, they will be punished. Don’t ask me why; I guess kings were just into that kind of freaky power trip bullshit back in the day.
In one formulation of this puzzle, after looking at the others’ hats and seeing they’re both the same colour, one wise man finally blurts out, “I don’t know!” regarding the colour of his own hat. This gives the other wise men the information they need to figure out which hat they’re wearing.
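The key inference is easy to make concrete. Here is a toy sketch of my own (not Floridi’s formal test): a wise man who saw two black hats would immediately know his own hat is white, so hearing “I don’t know” tells the listeners the speaker sees at least one white hat.

```python
# Toy model of the wise men deduction (an illustration, not Floridi's
# formalization). Rule: at least one of the three hats is white.

def first_answer(others_hats):
    """What a wise man can say after seeing the other two hats."""
    if others_hats == ("black", "black"):
        # At least one hat is white, and neither of theirs is,
        # so it must be mine.
        return "white"
    return "I don't know"

def update_after_ignorance(hat_i_see_on_third_man):
    """What a listener learns from the speaker's 'I don't know':
    the speaker does not see two black hats, so if the hat I can see
    on the remaining man is black, my own hat must be white."""
    if hat_i_see_on_third_man == "black":
        return "white"
    return "still unknown"

# All three actually wear white, so the first wise man can't decide:
print(first_answer(("white", "white")))   # "I don't know"
# A listener who sees a black hat on the remaining man could now
# conclude his own hat is white:
print(update_after_ignorance("black"))    # "white"
```

In the all-white case the announcement alone doesn’t settle things for the others either, which is exactly the gap Floridi’s harder version exploits.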
This isn’t difficult enough for computers, Floridi reasoned: if all three wise men are equally intelligent, why would only one confess that he doesn’t know and hand that key piece of information to the others? The only logical response is for all three to register their ignorance. So he designed the AI test that Bringsjord and his colleagues eventually engineered robots to beat.
According to Bringsjord, getting the Nao bots to complete the test wasn’t easy. “With all respect to [Nao], these are great robots, they’re wonderful robots, but they fail,” Bringsjord said. “You can verify the code top to bottom, but when you run the stuff, things just simply go wrong. We don’t have explanations for that sometimes.”
Evidently, even in a highly engineered test meant to get robots to solve a tightly defined problem, there’s still a little bit of room for the machines to rebel.