Will Robots Be Able to Help Us Die?

Unusual ethical questions such as this one are why the Open Roboethics Initiative exists.
Image: i k o/Flickr

The robot stares down at the sickly old woman from its perch above her home care bed. She winces in pain and tries yet again to devise a string of commands that might trick the machine into handing her the small orange bottle just a few tantalizing feet away. But the robot is a specialized care machine on loan from the hospital. It regards her impartially from behind a friendly plastic face, assessing her needs while ignoring her wants.

If only she'd had a child, she thinks for the thousandth time, maybe then there'd be someone left to help her kill herself.

Hypothetical scenarios such as this inspired a small team of Canadian and Italian researchers to form the Open Roboethics Initiative (ORi). Based primarily out of the University of British Columbia (UBC), the organization is just over two years old. The idea is not that robotics experts know the correct path for a robot to take at every robo-ethical crossroad, but rather that they do not.

"Ethics is an emergent property of a society," said ORi board member and UBC professor Mike Van der Loos. "It needs to be studied to be understood."

To that end, ORi has been conducting a series of online polls to determine what people think about robot ethics, and where they disagree. Could it ever be ethical to replace a senior citizen's human contact with mostly robot contact? Should a minor be able to ride alone in a self-driving car? Should a robot be allowed to stop a burglar from breaking into its owners' otherwise empty home? The team hopes that by collecting the full spectrum of public opinion on these kinds of questions, they can help society avoid foreseeable pitfalls.

Privacy "isn't just about a robot collecting and relaying information to third parties, but also about how it feels to have a social robot around you all the time."

Unsurprisingly, ORi's earliest polls probed how people might want the self-driving cars of the future to prioritize safety on the road in the event of a crash. When asked whether a car should favour its own passengers' safety or the overall minimization of risk, respondents were remarkably split. And when given the choice between killing its own passenger and killing a pedestrian child, two-thirds of respondents thought a self-driving car should choose to kill the child.

"I bought the car," reasoned one anonymous participant. "It should protect me."

But robo-ethical quandaries don't have to involve life-or-death situations to be divisive and complex. In a poll about appropriate designs for bathing-assistance robots, complete autonomy represented a loss of control for some users and a gain in privacy for others. In other words, a robot bath might not be as comfy as one given by a human, but for many people that's a worthwhile trade to eliminate all human eyes from the bathroom.

University of Washington law professor Ryan Calo told Motherboard that such privacy issues could be even touchier than ORi's data suggests. Research has shown that the presence of a humanoid robot can have some of the same effects as the presence of a human companion. Privacy, he said, "isn't just about a robot collecting and relaying information to third parties, but also about how it feels to have a social robot around you all the time."

"How do you deal with liability when you have a physical entity that can cause physical harm, running third party code?"

The context of a situation can dramatically influence a person's idea of which principle ought to direct a robot's actions, too. Public health concerns might take precedence when deciding not to serve a drink to a diagnosed alcoholic, but personal liberty might be more important when handing a cheeseburger to a person with high blood pressure.

In fact, the team has found that people's reactions can vary widely based on fairly subtle variations in the moral structure of a situation. "There are key issues for different people," said ORi executive and UBC master's student Shalaleh Rismani. "They can flip a situation completely. Ownership [of the robot] is big, but so is privacy and control… and safety."

Calo, for instance, has written about the potential ethical problems posed by open-source development of robot software—in other words, the ability for anyone to modify and contribute to a commercial robot's code. "How do you deal with liability when you have a physical entity that can cause physical harm, running third party code?" Calo said.

The experts at ORi are the first to admit that polling is just a first step toward sensible robot policy. But they think it's a necessary one if we are to avoid, as Calo put it, "one or two engineers just deciding what they think": a future where a revolutionary tech product's most challenging ethical questions don't arise until it's ready to hit the market. From robo-assisted suicide to commercial drone use in urban areas, robots will create new ethical quandaries at precisely the same rate that engineers imbue them with new abilities.

ORi still has a great deal of work ahead of it. Today's high-tech industry gives most robot-ethics questions only passing attention, and funding has been hard to come by. And despite the intriguing nature of their questions, some polls still suffer from too few participants. ORi hopes to apply for funding through the Elon Musk-backed Future of Life Institute; for now, explicitly futurist organizations like it seem to be the most realistic sources of support.

"The issues won't stop," Rismani insisted. "We want to keep going with this until there are no more questions left to ask."