The Tradeoffs of Imbuing Self-Driving Cars With Human Morality

When cars drive themselves, will they take human notions of morality into their decision-making?
Image: Google

Humans have a sketchy history with moral decision-making. In an election year, one could argue that morals are simply something we read about in children's books. While we might be questioning ours on a daily basis, self-driving cars add another wrinkle to the twisting evolution of human morality: automobile morality. Self-driving cars are being taught to think and to react, but are they being taught the difference between a logical decision and a moral one?

I have always maintained that there are four key areas to the act of driving: technical skills, variable analysis, decision morality, and pure instinct. Self-driving cars can easily cover the first two, and instinct can be ferreted out through processors that work faster than our puny brains. That leaves decision morality. Let me give you a few examples: swerving for a squirrel versus risking an accident; edging into pedestrians (without hitting them) at a NYC crosswalk; passing a slow-moving semi-truck on a lonely country road with low visibility; getting stuck behind the mailman; and so on.

Ford plans on releasing its own self-driving cars, sans steering wheels, by 2021, which means all decisions will rest solely within the brain of the vehicle. There will be no option for human override. Of course, all of this is moot in a world where every car is a self-driving car and the infrastructure supports autonomous transportation. For a while, though, it will be a mixed highway, and that is where we have to ask whether a self-driving car can avoid killing us when it meets a situation it wasn't programmed for or one that violates its decision-analysis rules.

A recent study revealed that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if that means sacrificing the driver; unless, that is, they are the driver. So, no martyrs to be found there.

Until we transition to a fully autonomous transportation system, there are always going to be scenarios to consider. If a human driver is about to cut off a self-driving car that has the legal right of way, what does the self-driving car do? Does a self-driving car speed through a yellow light so it doesn't run a red, or does it stop and treat the yellow like a red to be safe? Does it stop short, potentially causing an accident because the human driver behind it assumed it was going through the yellow?
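To make that yellow-light question concrete, here is a minimal, hand-coded decision rule of the kind such logic might resemble. The thresholds, the three-way outcome, and the function itself are illustrative assumptions for this article, not any manufacturer's actual programming.

```python
# Illustrative sketch only: a hand-coded "decision rule" for the yellow-light
# question above. The numbers and the rule are assumptions for the example.

def yellow_light_decision(speed_mps, distance_to_line_m, yellow_s,
                          intersection_width_m=15.0,
                          comfortable_decel_mps2=3.0,
                          reaction_s=0.2):
    """Return 'stop', 'proceed', or 'dilemma' for a yellow light."""
    # Distance needed to stop smoothly (brief processing delay + braking).
    stopping_dist = speed_mps * reaction_s + speed_mps**2 / (2 * comfortable_decel_mps2)
    # Distance covered at current speed before the light turns red.
    clearing_dist = speed_mps * yellow_s

    can_stop = stopping_dist <= distance_to_line_m
    can_clear = clearing_dist >= distance_to_line_m + intersection_width_m

    if can_stop and not can_clear:
        return "stop"
    if can_clear and not can_stop:
        return "proceed"
    if can_stop and can_clear:
        return "stop"          # conservative tie-break
    return "dilemma"           # the zone where neither option is clean


if __name__ == "__main__":
    # 15 m/s (~34 mph), 30 m from the stop line, 3-second yellow.
    print(yellow_light_decision(15.0, 30.0, 3.0))
```

The awkward part, of course, is the last branch: the rule can tell you when neither stopping nor proceeding is clean, but it can't tell you what to do about it.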

"It's not gonna be that easy. Decisions—how does a self driving car decide between X or Y," Dan Wellers, Digital Futures Global lead at SAP, told Motherboard. "I've been around this industry for a while and I've always thought about software as being the business logic; it's easy to make that jump to software equals intent. There are two schools of thought on coding ethical decisions into the software. The first is seeing what happens then analyzing the data and making revisions. But there is another school of thought. Convolutional neural networks; using cameras, GPUs, neural network A.I. to learn as it goes."

It could be said that when these instances appear and are reported to the manufacturer, the cars would then be given a software update to handle the same decision in the future. That's fine in theory, but in practice every interaction on the road is unique, even if it closely resembles one that came before. At least MIT has built a fun game around moral dilemmas.

These variable scenarios won't just make for quirky plot lines on procedural cop shows; they'll become real-life pressure points in discussions about safety and how we adapt to relying on self-driving cars. It is easy to imagine the scenarios that might pop up.

There is a medical emergency. The self-driving car has no steering wheel. You need to get to the hospital immediately. Normally, you'd speed, run some stop signs. Can a self-driving car enter emergency mode? Does it simply pull over and await authorities? Does it allow itself to break the rules for the benefit of the passenger?

While those questions can't be answered now, Chris Heckman believes that machine learning will enable self-driving cars to make what we fleshy meat bags would consider moral decisions. Heckman, a professor of robotics and control theory, and his team at the University of Colorado's Autonomous Robotics & Perception Group are continuously researching how to create robots that learn.

"Moral decisions and instinct are all a part of learning from experience, which maps the current situation to remembered ones. Negative experiences are also included in this, which we've seen to be invaluable for training a robust system," Heckman told Motherboard via email. "I don't think companies fielding self-driving cars will need to address all of this at once. If novel situations can be identified before decisions must be made, then teleoperation (invoking remote operators to drive the cars) could address challenging notions of morality directly."

All this hand-wringing about morality in self-driving cars is soon going to be a moot point. We don't need them to pass the Turing Test to get us from point A to point B; we just need them to follow the rules that are programmed into them. And we need to accept that, very soon, none of us will be driving our cars at all.

"I think the discussion about instinct and morality is partially driven by the visceral reaction that humans may not be driving their cars in the future," Dr. Kevin McFall, Assistant Professor in the Mechatronics Engineering Department at Kennesaw State University told Motherboard. "I imagine similar concerns surfaced when the horse was replaced with automobiles. Certainly something was lost back then without the bond to a living vehicle. But looking back I don't think anyone would second-guess that transition. To this day we still ride horses for entertainment. Perhaps some day will only drive cars for entertainment as well."