It turns out that our implicit assumption that robots know better than we do could put us in danger. A study by researchers at the Georgia Tech Research Institute showed that in a simulated emergency, people were willing to override their own common sense and follow a robot to safety.
In the experiment, students were seated in a waiting room and instructed to fill out a form. Eventually, the hall outside the room filled with smoke and an alarm sounded. The student then had to decide whether to leave the way they came, or follow a robot indicating a previously unknown route.
The sample size of the study was small, with only 30 participants, but the results surprised the researchers. Twenty-six of the 30 students who participated in the study chose to follow the robot, despite the presence of a clearly marked exit that was already known to them.
Graduate student Paul Robinette, who led the study, expected altogether different results. As he told New Scientist, “We thought that there wouldn’t be enough trust, and that we’d have to do something to prove the robot was trustworthy.” Yet even when the robot made an error and led students into a conference room first, they continued to follow its directions until it brought them to an exit.
But the strength of the results may have been partly attributable to the fact that the robot wore a sign designating it as an “emergency response robot.” In an exit survey, several participants cited the sign as a reason for their trust.
The study is the first to examine trust between robots and humans in an emergency setting, and it is only one part of a planned suite of research on the subject. But senior research engineer Alan Wagner says the Georgia Tech team may have to change tack after these results. “We wanted to ask the question about whether people would be willing to trust these rescue robots,” said Wagner. “A more important question now might be to ask how to prevent them from trusting these robots too much.”