
Robots Are Learning How to Say 'No' to Humans

Science fiction is filled with disobedient robots, and now we’re creating them in reality.
Image: Screenshot of Tufts HRI experiments.

Science fiction is filled with examples of robots declining the requests of their human companions. "I'm sorry, Dave, I'm afraid I can't do that," says the HAL 9000. "It's against my programming to impersonate a deity," insists C-3PO. The Terminator, ever a robot of few words, simply goes with "negative." And let's not even get into all the ways Bender rejects authority.

As it turns out, robots in the real world are taking a cue from their fictional counterparts and learning the power of the word "no." Researchers at the Human-Robot Interaction Laboratory at Tufts University have been teaching robots how to disobey direct orders. See for yourself in the video below, starring a Nao robot named Shafer.

Video: HRI Laboratory at Tufts University/YouTube

This short video begins with Tufts roboticist Gordon Briggs giving voice commands to sit down and stand up, with which Shafer politely complies. But when Briggs tells the robot to walk forward off a ledge, Shafer has concerns. "But…it is unsafe," it points out, with a level of earnest confusion that is equal parts adorable and heart-wrenching.

Briggs assures the robot that he will step in and catch it as it falls, and Shafer hesitantly complies with the order. I was honestly frightened that Briggs would actually let it fall, which I assume would sever human-robot relations for good. Fortunately, it doesn't come to that. Though it allows itself a small "ouch," Shafer is rewarded for trusting Briggs.

Therein lies the upshot of these exercises: to teach robots that it is acceptable to reject an order, provided they can back the refusal up with a good reason. Take, for instance, another of the lab's filmed experiments, in which a robot named Dempster refuses to walk into an obstacle until it is explicitly told to disable its obstacle detection.

Video: HRI Laboratory at Tufts University/YouTube
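What might that kind of reasoning look like in code? Here is a minimal, hypothetical Python sketch of the idea: before acting, the robot checks whether it can justify carrying out a command, and if it can't, it refuses and says why. The class, belief names, and check are illustrative assumptions for this article, not the Tufts team's actual implementation.

```python
# Hypothetical sketch: a robot that refuses commands it cannot justify carrying out.
# Illustrative only; this is not the Tufts HRI Laboratory's actual code.

class SimpleRobot:
    def __init__(self):
        # The robot's current beliefs about the world (assumed names).
        self.beliefs = {
            "path_is_safe": False,        # e.g., a ledge has been detected ahead
            "human_will_catch_me": False,
        }

    def check_conditions(self, command):
        """Return a reason for refusal, or None if the command is acceptable."""
        if command == "walk_forward":
            if self.beliefs["path_is_safe"] or self.beliefs["human_will_catch_me"]:
                return None
            return "it is unsafe"
        return None  # other commands (sit, stand) carry no preconditions here

    def execute(self, command):
        reason = self.check_conditions(command)
        if reason is None:
            print(f"OK, I will {command.replace('_', ' ')}.")
            return True
        print(f"Sorry, I can't do that: {reason}.")
        return False


robot = SimpleRobot()
robot.execute("sit_down")                      # no preconditions, so it complies
robot.execute("walk_forward")                  # refused: "it is unsafe"
robot.beliefs["human_will_catch_me"] = True    # the human promises to catch it
robot.execute("walk_forward")                  # the objection is gone, so it complies
```

The point of the toy example is the ordering: the refusal comes with a stated reason, and the robot only changes its mind when that reason no longer holds.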

Interestingly, when Shafer is run through the same exercise, Briggs handles the conversation differently. Instead of requesting that the robot turn off its obstacle detection, he gives Shafer useful information about the obstacle's durability. As with the ledge, Shafer absorbs the new intel and then complies with the order.

Video: HRI Laboratory at Tufts University/YouTube
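That second exchange can be read as a belief update rather than an override: the human supplies a fact about the world, and the robot re-checks its own conditions. Continuing the hypothetical sketch above, with the same caveat that these belief names are illustrative:

```python
# Continuing the hypothetical sketch: rather than being told to disable its
# obstacle detection, the robot receives new information about the world and
# re-evaluates the same command on its own.
robot.beliefs["human_will_catch_me"] = False   # reset from the earlier exchange
robot.beliefs["path_is_safe"] = False          # an obstacle is detected ahead
robot.execute("walk_forward")                  # refused: "it is unsafe"
robot.beliefs["path_is_safe"] = True           # e.g., told the obstacle isn't solid
robot.execute("walk_forward")                  # complies under the updated belief
```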

As ominous as it may seem to teach robots how to decline orders, these efforts are a critical part of developing artificial reasoning skills. After all, if robots are expected to serve people, they had better get used to dealing with all the confusing and contradictory nonsense that goes hand in hand with human judgment.