
Why Google’s Self-Driving Car Crash Doesn’t Change Anything

The crash got plenty of media attention, but don’t expect major changes in the development of self-driving cars.
A Lexus model Google self-driving car. Image: Getty Images

Google is blaming a "tricky set of circumstances" for what's believed to be the first accident caused by one of its self-driving cars.

The accident took place on February 14 and involved a self-driving Lexus and a public transit bus in Mountain View, Calif. According to Google, the self-driving Lexus was preparing to make a right-hand turn, hugging the curb in the process, when it veered to the left to avoid sandbags blocking a storm drain. In veering to the left, the self-driving Lexus struck the bus at a speed of around 2 miles per hour.

"Accidents with any system that involves human beings, either as drivers or designers, are inevitable. It's just a question of how much."

While Google told the California Department of Motor Vehicles that it has already examined data from the crash to "improve an important skill for navigating similar roads," the incident isn't necessarily a cause for alarm.

"It's a historic event in a way, but it's not the last time it's going to happen," said Michael Froomkin, a University of Miami School of Law professor who co-edited an upcoming book on the intersection of robotics and law called Robot Law. "Accidents with any system that involves human beings, either as drivers or designers, are inevitable. It's just a question of how much."

Much has been made of the fact that the self-driving car assumed the bus would allow it to enter the lane, thus causing the accident. But to Froomkin, criticizing the self-driving car for making this assumption undermines the promise of self-driving cars.

"We all make assumptions about other cars on the road—we assume they're going to stay in their lanes," he said. "But what's the alternative? Are you telling me that no human drivers had fender benders in the city all week?"

At this point I tried to probe a little deeper on this idea of self-driving cars making assumptions on behalf of their passengers: Would this create an opportunity for carmakers to stand out from one another? In the future, would it be wise for, say, BMW to advertise that its self-driving cars are more aggressive than Honda's, and will therefore get you from Point A to Point B faster?

Don't count on it.

"That would be a pretty dangerous strategy from a liability point of view," said Froomkin. "Ads [touting] that would be used against you in court."

Still, while Froomkin quite rightly stressed that human drivers aren't perfect, not everyone Motherboard spoke with was so quick to give Google the benefit of the doubt following the accident.

"This shows we're not there yet," said John Simpson, of the advocacy group Consumer Watchdog, referring to the viability of self-driving car technology. "That's not to say a decade or two from now we won't be," but the accident shows "these cars are still not ready to go out on the road without a driver inside who's ready to intervene."

In Simpson's view, traditional automakers are being more reasonable in their measured development of self-driving cars; he pointed to the gradual introduction of driver-assist technologies like blind-spot cameras and alerts that sound when drivers drift into an adjacent lane.

"As Google is wont to do, they're taking a moonshot approach to developing the technology," he said. "But how do robot drivers interact with interact with human drivers? With the average of cars on the road being 11 years, it's going to be quite a while before most of the cars on the road will be self-driving."

And while Simpson's concerns are understandable, humans aren't exactly perfect drivers: some 26,000 people were killed in car crashes in the first nine months of 2015, according to the National Highway Traffic Safety Administration. The question, then, isn't whether we're OK with self-driving cars crashing at all; it's whether we're OK with them crashing at a lower rate than human-driven cars.

"If we get to a point where in controlled trials the accident rates are significantly less than with human drivers then we have safer cars," said Froomkin. "And safer is not the same as no accidents."

In any event, all of this may be moot, because Froomkin already knows how to build a perfectly safe self-driving car.

"I can build you a robotic car that is 100 percent safe," he said, "it just never goes anywhere."