A Robot Made Me Do It

Studies suggest we will obey our future robot bosses. But what if a machine asked us to perform a morally objectionable task? Would we?
Photo: Her/Annapurna

Now, robots are everywhere, seducing us on screen and through screens, stealing our jobs, saving us from our jobs, suggesting what we should listen to or watch or buy next. A new study indicates we're even ready to be bossed around by them.

That might not sound completely surprising; if we spend any time online, we're already being nudged by automated systems and recommendation engines all the time. But the study raises interesting questions about the kinds of demands a robot might make on a human, even a robot the size of a small child, and how far we might be swayed when one asks us to do things we wouldn't normally do.


Researchers at the Human-Computer Interaction Lab at the University of Manitoba set up a simple experiment: They asked subjects to perform a series of repetitive tasks, like manually renaming batches of hundreds, then thousands of image file extensions from ‘.jpg’ to ‘.png.’ They were allowed to quit at any time, but if they protested, which they often did, an actor named Jim would prod them to continue.

In half of the trials, Jim was a 27-year-old male wearing a lab coat; in the other half, Jim was a knee-high humanoid robot with a high-pitched voice. Robot Jim's success rate at getting his "employees" to keep working was just 46 percent, far lower than Human Jim's 86 percent. But the findings point to some of the ways humans already obey robots, and the ways they might yet. (You can see for yourself how the two Jims' influence differed in this video from the lab, which went to the trouble of adding creepy music and an ominous font.)

Like a robot boss. Video: Human-Computer Interaction Lab at the University of Manitoba

The research, published in the Proceedings of the First International Conference on Human-Agent Interaction, may be one of the first obedience studies to pair humans with robots this way. In the paper, Derek Cormier and his co-authors gesture at larger questions by alluding to the granddaddy of the genre: psychologist Stanley Milgram's infamously ethically dicey obedience experiments of the 1960s.

Milgram, who wanted to determine what compels ordinary people to do terrible things, was provoked by the trial of Nazi SS officer and Holocaust organizer Adolf Eichmann, during which Eichmann claimed he was just doing what he was told to do.


Milgram's participants were assigned the role of 'teacher' and told to administer incrementally stronger electric shocks, up to 450 volts, to a 'student' they could hear but not see, whenever the student answered a question incorrectly. The student was in reality an actor and wasn't harmed, but would feign pain, screaming in proportion to the supposed voltage. The shock levels were labeled with phrases like "slight shock," "moderate shock," "danger: severe shock," and simply "XXX."

If a teacher refused to deliver a shock, the researcher in the room would sternly encourage them to proceed. Horrifically, 65 percent of the teachers went all the way despite their visible frustration and despair, and in one trial, 85 percent of participants were fully obedient, administering what they believed could be lethal shocks; several suffered emotional trauma. The experiment was recreated and aired by the BBC in 2009, and it's just as disturbing to watch as you'd imagine.

What do these two experiments imply about the future of giving and taking orders? Robot Jim was not an autonomous AI, as Her imagined it; his speech and gestures were controlled from another room, a fact some of the study's participants picked up on.

Still, the humans in the study behaved differently around Robot Jim. His novelty and non-threatening presence loosened tongues during an informal question-and-answer session ("Can you dance?" and "What's your favorite movie?" were two of the questions humans asked the robot, which responded intelligently before pressing them back to work). And when working with Robot Jim, humans tended to complain more crassly, sometimes with expletives, than they did in front of Human Jim.


Yet 76 percent considered their robot friend a legitimate authority, even if they found it difficult to explain exactly why. And when the robot threatened to end the experiment after bored participants protested, "several appeared nervous or guilty when the robot said it was notifying the lead researcher that the experiment was over." "No! Don't tell him that! Jim, I didn't mean that…I'm sorry. I didn't want to stop the research," one participant said. Your mother isn't the only one capable of a successful guilt trip, it seems.

Now what?

It isn't so difficult to imagine, say, a more charismatic, lifelike robo-boss overseeing a group of humans working in an Amazon shipping warehouse (until they too are replaced by robots). And as Motherboard's Brian Merchant wrote last month, about a set of robot traffic police in Kinshasa, "automated machines designed to facilitate compliance to mundane tasks are precisely the sort we're already obeying—automatic toll collectors, supermarket checkout computers, and the automated voice commands in public transit."

But what role could Robot Jim play in other, more complicated scenarios? What if a robot played the role of the researcher in Milgram's famous experiment, nudging humans to inflict pain? What if a robot acting as a boss, a military field commander, a middle manager, or a caretaker asked us, for whatever reason, to perform a morally objectionable task? Would we?


As we head towards a future of technological integration in everyday life, warns the HCI team, “it is crucial that researchers consider how computationally-advanced and information-rich autonomous robots will be seen as authority figures, and investigate people’s responses when given commands or pressured by such robots.” Considering the way that ordinary people can be compelled to do things that contradict their morals, they write, "there is a real danger which must be addressed by the HRI community."

Robot Jim is hardly our idea of an intimidating robot overlord, but appearance may not matter. In 2012, psychologists Alex Haslam and Stephen Reicher argued that good people are more likely to do bad things when they're following authority figures they identify with, and not simply, as Milgram's experiment would have it, because someone or something tells them to.

So why, then, would we listen to a robot? Putting aside a robot's visual appearance, some studies have shown that we have a capacity to place trust in computers, as long as we are convinced we are interacting with a conscious being. In the mid-1960s, MIT computer scientist Joseph Weizenbaum developed ELIZA, an early natural language processor conceived as a kind of parody of a non-directive psychoanalyst, leading patients on with a stream of questions built from whatever they had just typed.


ELIZA appeared only as text on a screen, but her questions were so convincing that early testers thought they were typing with a real psychoanalyst and spent hours with her attempting to resolve emotional issues. (You can play with her here.)
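For a sense of how simple the trick was, here's a minimal sketch in Python of the kind of keyword-matching and pronoun-reflection loop an ELIZA-style program runs. It's a toy of my own, not Weizenbaum's actual DOCTOR script: spot a pattern in whatever the "patient" just typed, flip the pronouns, and hand back a canned question.

```python
import re

# Toy ELIZA-style responder (an illustration, not Weizenbaum's original script):
# match a keyword pattern in the last thing the "patient" typed, reflect the
# pronouns back at them, and reply with a canned question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    """Build a question from the first rule whose pattern matches the statement."""
    cleaned = statement.lower().strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

if __name__ == "__main__":
    print(respond("I feel anxious about my robot boss"))
    # -> Why do you feel anxious about your robot boss?
```

Feed it "I feel anxious about my robot boss" and it answers "Why do you feel anxious about your robot boss?" That pattern-and-reflection shuffle is more or less the whole illusion.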

We care for robots; what about them?

Even as our familiarity with robots has grown significantly since the '60s, the rules of reciprocity (you scratch my back, I'll scratch yours) still stand. We readily perceive robots as social beings, and we're likely to help a robot we think is intelligent, as long as it has helped us first.

In one study, 90 percent of children (granted, significantly less jaded than most of us humans) believed it was "unfair" to put a robot they had socialized with into a dark closet, though only 50 percent found it morally wrong. And soon, our metal friends will be able to see the pain in kids' eyes and respond convincingly. An article published in Current Biology this week says that machine vision systems have advanced to the point where, in one study, they correctly discerned a human's pained emotions from facial expressions 85 percent of the time, far better than humans' own measly pain-spotting record of 51 percent.

AI still has a long way to go before it comes close to matching humans in intelligence and emotion. But our machines are compelling, and they are already compelling us to do things, too. Last year, researchers at USC's Institute for Creative Technologies unveiled a virtual reality ELIZA-type therapist called Ellie as a new 'face' to help stubborn veterans address their problems post-service. This trust in machines extends to robotic medical professionals, too; in another study, researchers found that a patient would go as far as to stick a thermometer in his anus for ten seconds because a humanlike robot asked him to (yes, there's video).

It's an indication that robots can be programmed to be convincing enough to nudge us for our own good. But machines have no sense of morality of their own, and their notion of what's good won't necessarily match ours.

Point is—and this is only advice coming from a human—consider being less of a pushover starting now.