


People Want Autonomous Cars to Protect the Greater Good, But Only After Them

We want other people's cars to sacrifice their passenger if it means saving more lives, but we wouldn't buy one ourselves.
Image: Iyad Rahwan

An autonomous vehicle is carrying its passenger straight towards a group of pedestrians. It can either hit and kill the pedestrians, or swerve into a wall and kill its own passenger. What should it do?

The (hopefully obvious) moral choice would be for the car to sacrifice its own passenger to save the greater number of people; it's for the greater good. A new study published in Science found that most respondents approve of such a "utilitarian" autonomous vehicle, at least in theory. The catch is that when their own lives are on the line, those same people would rather buy an autonomous vehicle that protects its passenger.


"Across all of the studies, there was a consensus; people recognised what the moral thing to do is in this situation," said Azim Shariff, a psychologist at the University of Oregon and one of the authors of the new paper, in a phone call. "They just don't want to be paying money and putting themselves in a car that could decide to sacrifice them—even if to sacrifice them is the moral thing."

Shariff and his colleagues, Jean-François Bonnefon from the University of Toulouse in France and Iyad Rahwan from the MIT Media Lab, posed variations of the above quandary to almost 2,000 US participants over Amazon's Mechanical Turk platform. The hypothetical scenario is a modified version of the well-known "trolley problem," an ethical dilemma that asks whether a human should divert a runaway train to kill one person rather than the five it's headed toward.

Across their surveys, people overwhelmingly thought that it would be more moral for an autonomous vehicle to sacrifice its passenger to save more people, and that cars should be programmed to minimise injury to the many, rather than to the few. Even when asked to imagine that they or a family member were the car's passenger, the majority agreed this was still the most moral course.

But another question reveals the dilemma: When asked how likely they were to buy a "utilitarian" autonomous vehicle that could sacrifice them or their family for the lives of others, or a "self-protective" model that would always put their lives first, people indicated they'd be significantly more likely to go for the latter. "In other words, even though participants still agreed that utilitarian AVs [autonomous vehicles] were the most moral, they preferred the self-protective model for themselves," the researchers write.


Basically, we're selfish and we don't want to die.

"We're pretty confident that Tesla and Google and all those guys are running their own private focus groups to deal with these exact moral situations"

If the market is left to its own devices, car makers would presumably program their cars to protect their passengers, reflecting customer demand but making everyone less safe. Shariff compared the situation to a "tragedy of the commons": everyone would be better off if every car were programmed to minimise total harm, but because each individual buyer prefers a self-protective car, everyone ends up worse off.

The authors considered government regulation as a solution. But it might not be the answer: participants said they were much less likely to buy an autonomous vehicle regulated to sacrifice its passengers than an unregulated one. Why does that matter? Because a world of autonomous vehicles is generally expected to be safer than one without, so delaying the technology's adoption through unpopular regulation could actually put more lives at risk. Any kind of autonomous vehicle is an improvement on the current rate of traffic accidents.

It may sound like an abstract thought experiment, but it's not just hypothetical. Shariff pointed out that, given the sheer number of hours autonomous cars will spend on the road, even very rare scenarios like this one are likely to come up eventually. And even if they never do, the cars still need to be programmed to handle them, just in case.

"We're pretty confident that Tesla and Google and all those guys are running their own private focus groups to deal with these exact moral situations, but we think this is a conversation that should happen in public; it's something that people should weigh in on and should be educated about," said Shariff.

He added that attitudes might change as we get more comfortable with the technology; the moral issues around autonomous cars may well develop alongside their technical challenges.