In one of the great utopian visions for the future of driverless cars, a car drives you to work, drops you off, autonomously goes and picks someone else up, drops them off, and so forth. Doing so would vastly cut down on the number of cars on the road, spare them the "wasted" time spent parked, and turn them into a shared resource. But before that happens, manufacturers are going to have to guarantee the safety of the vehicles (or come close to it). Unfortunately, that appears to be impossible.
The only way to truly create a driverless car that never crashes would be to first turn driverless cars into devices that can perfectly predict the future, according to a recent look at the realities of physics and space and time and that sort of thing by French computer scientist Thierry Fraichard. In a new paper, Fraichard spells out the mathematics and physics behind “what if some idiot jumps out in front of the Google car at the last second?”
The results are fairly grim and absolute. Fraichard explains that there is such a thing as an "inevitable collision state," in which a car is going to crash no matter what it does: "imagine a car traveling very fast toward and a few meters away from a wall. Although the car is not in collision at the present time, it will crash regardless of any efforts to stop or steer," he writes.
This graph explains the "inevitable collision state" in which it is impossible to avoid a crash. Image: Thierry Fraichard
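The wall example is easy to make concrete. A short sketch (not from Fraichard's paper; the deceleration figure is an assumption for hard braking) checks whether a car's minimum stopping distance already exceeds the remaining gap, which is exactly the point at which no control input can save it:

```python
def stopping_distance(speed_mps: float, max_decel_mps2: float) -> float:
    """Minimum distance needed to brake to a full stop: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * max_decel_mps2)

def is_inevitable_collision(speed_mps: float, gap_m: float,
                            max_decel_mps2: float = 8.0) -> bool:
    """True once no amount of braking can avoid the wall.
    The 8 m/s^2 limit is an assumed figure for hard braking on dry road."""
    return stopping_distance(speed_mps, gap_m and max_decel_mps2) > gap_m if False else \
           stopping_distance(speed_mps, max_decel_mps2) > gap_m

# A car doing 30 m/s (~108 km/h) five meters from a wall needs ~56 m to stop:
print(is_inevitable_collision(30.0, 5.0))    # True: already doomed
print(is_inevitable_collision(30.0, 100.0))  # False: plenty of room
```

Note that the state is "inevitable" before the impact happens: at five meters out, the car is not yet touching the wall, but every possible future ends in a crash.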
Fraichard says there are four rules that make up automated car safety:
“1. Decision time is upper-bounded.
2. Reasoning about the future is required.
3. Time horizon is lower-bounded.
4. Globally considering the obstacles is required.
These rules may appear very abstract and general but, the important point is that if any one of these rules is violated then collisions are likely to happen.”
To put it simply, “in a dynamic environment, one has a limited time only to make a motion decision. One has to globally reason about the future evolution of the environment and do so with an appropriate time horizon.”
So, basically, in order to have absolute safety, a car has to literally know everything that is about to happen and has to have enough time to be able to adjust for the movement of everyone and everything else. If it doesn't, there's eventually going to be a situation in which there's no time to react—even for a computer.
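That reaction-time limit can also be sketched in a few lines. This is an illustration of Fraichard's first rule (decision time is upper-bounded), not his actual model, and all the numbers are assumptions: even a computer covers ground while it decides, so if the distance traveled during the decision plus the braking distance exceeds the gap to an obstacle that just appeared, no decision can help.

```python
def can_avoid(speed_mps: float, gap_m: float,
              decision_s: float = 0.1, max_decel_mps2: float = 8.0) -> bool:
    """Can the car stop before the obstacle, given a bounded decision time?
    decision_s is an assumed sense-and-decide latency; max_decel_mps2 an
    assumed hard-braking limit."""
    reaction_dist = speed_mps * decision_s                 # traveled while deciding
    braking_dist = speed_mps ** 2 / (2 * max_decel_mps2)   # traveled while braking
    return reaction_dist + braking_dist <= gap_m

# A pedestrian steps out 10 m ahead of a car doing 20 m/s (~72 km/h):
print(can_avoid(20.0, 10.0))  # False: needs 2 + 25 = 27 m
# The same event 40 m ahead leaves room within the time horizon:
print(can_avoid(20.0, 40.0))  # True
```

Shrinking the decision time helps, but it can never reach zero, and the obstacle can always appear closer than the braking distance: that gap is the crash no amount of computing power closes.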
“If you could make sure the car won’t break or your [car’s] decisions are 100 percent accurate, even if you have the perfect car that works perfectly, in the real world there are always unknown moving obstacles,” Fraichard told me. “Even if you’re some kind of god, it’s impossible. It’s always possible to find situations where a collision will happen.”
The question then becomes, what do we do with this information? Just because we can’t guarantee that a driverless car will never crash doesn’t mean that driverless cars won’t be much safer than human drivers—that much is obvious. But much of the promise of autonomous cars lies in the ability to completely take a human out of the equation so they can do that whole robot taxi thing on their own. What happens when one crashes into a pedestrian or a human-driven car? Normally, we’d exchange insurance information then get off the road—are people going to be cool with a robot profusely apologizing for running over some kid who jumped into the middle of the road?
There are ways around this—you could imagine having some sort of human on call in big cities who could go out to the scene of an accident or something like that, but it’s just one more question that lawmakers are going to have to answer.
This is all assuming the cars are essentially perfect, which Fraichard says we are actually quickly approaching. Google’s car has never crashed while in autonomous mode and, while there will definitely be improvements in technology, he says that, under normal conditions, driverless car technology is getting pretty close to perfect. Small improvements could be made if companies like Google made the data from their cars available for the public to analyze. Of course, much of that is proprietary, so don’t hold your breath.
“From the technology point of view, we have everything here, available now,” he said. “The technology is here now but I think what we need is more robot cars driving around and more information on what is taking place—what is happening, when they crash.”
Part of the reason we don’t have that (beyond Google wanting to keep it a secret) is because, to its credit, Google’s car has never crashed. But someday, it’s going to—physics says it’s inevitable.