The First ‘Robotic’ Car Crash Happened in 1947

That time a court ruled that a girl was legally a 'robot.'
A 1947 Willys Jeep (the case does not note the specific model). Image: Travis/Flickr

The first car crash caused by an automated car, as far as the courts are concerned, did not happen in February 2016, when a Google car crashed into a California bus at low speed. It happened decades earlier: in 1950, a Missouri court ruled that an underage girl who had been behind the wheel of a jeep when it crashed was a "robot" who wasn't responsible for the crash.

The story goes something like this: In 1947, Grant Frye bought a new jeep (for $1,443.37, if you care) and let his son John drive it around one day. Frye told his son "to let no one drive that car under any circumstances." John, who was 17 at the time, immediately picked up a girl named Kathryn—who had never driven a car before—from her job at a movie theater, hit up a barbecue stand and, of course, let her drive. The date did not go very well.

"[John] told her he would teach her to drive, and at his request, she got under the wheel and he sat in the front seat beside her. He showed her how to start the car, how to use the clutch, how to shift the gear and where the brakes were," a court record from 1950 reads. According to the court, Kathryn followed John's directions without question. At one point halfway into a high-speed right turn, John reversed course and told her to turn left. Kathryn—"robotically," the court said—jerked the wheel left, flipping the jeep into a ditch.

"As far as John was concerned, she controlled the car the same as if she had been a robot or an automaton"

The case probably should have been lost to rejected 1950s black-and-white-sitcom-script-land, but Grant Frye was apparently a bit of a jerk. He not only sued Kathryn for the $200 worth of damage she did to the car, but also appealed when a lower Missouri state court ruled that John, not Kathryn, was responsible.

Frye's litigiousness is why we have the absurd 1950 Frye v. Baskin decision made by the Springfield Court of Appeals.

"As far as John was concerned, she controlled the car the same as if she had been a robot or an automaton," the court wrote. "When John said 'turn,' she turned, mechanically, she was the instrumentality by which John drove the car."

This case is a curio of weird American history, and a frustrating look at the gender roles of the era. But it's not entirely frivolous to revisit it now that a Google driverless car has been deemed at fault in a crash. The Frye case recently resurfaced thanks to Ryan Calo, a law professor at the University of Washington, whose paper "Robots in American Law" argues that judges have long had a seriously flawed view of what a "robot" is.

The Frye case is instructive, he says, because judges regularly treat robots as entities that strictly follow orders. Historically, judges have defined a robot as something that is programmed and incapable of making its own decisions. In Frye, that definition was extended to a girl.

"The idea is that a robot is what a person or entity becomes when completely controlled by another. Such a person or entity is not capable of fault or knowledge, leaving the person behind the machine—the programmer—at fault," Calo wrote. "While a robot, no one sees, hears, or does evil."

"The operation of the car was under John's control as much as if he had merely had a longer arm by which he would control it from the right side"

That definition may have made sense in 1950, but it certainly doesn't now. Even the most basic robots do things their creators never intended: Google surely didn't intend for its car to crash into a bus, but the car still made its own decision to steer around a set of sandbags, which led to the crash. Does that mean Google is at fault? Or is it the human passenger who was sitting in what is traditionally the driver's seat? That will ultimately be for courts and lawmakers to decide.

And in that one sense, the decision in Frye is instructive, though not technically precedent-setting. The court ruled that "the operation of the car was under John's control as much as if he had merely had a longer arm by which he would control it from the right side."

Does that mean that a driver who can take control from an automated system must do so in order to avoid fault in the event of a crash? That's the way courts have leaned before, Calo says, and it's likely the reason Google is focusing on making driverless cars without steering wheels altogether.

"As long as a human is in the loop somewhere, does that mean they can bear responsibility?" Calo said. "That came up in a lot of the cases I looked at—if there's the possibility of a person intervening, they tend to be looked at as the one at fault."