
Legal Analysis Finds Judges Have No Idea What Robots Are

There’s a surprisingly long history of court cases involving robots—that’s not really a good thing.

The last judge-ordered beheading in the United States occurred in 1981, when US District Judge Charles Brieant announced that the "dismantling of Walter Ego's head and torso will be required."

This wasn't some act of government-endorsed brutality, however. Walter Ego was a robot entertainer that was suspiciously similar to a competing robot named Rodney. Walter Ego's destruction made news briefs around the country under headlines like "Robot Beheaded" and "Walter Ego Loses His Head," but his plight has otherwise been lost in the annals of history.


We can't be sure what Brieant was thinking when he ordered poor Walter killed (Brieant died in 2008), but a new legal analysis by Ryan Calo, an assistant professor at the University of Washington School of Law, has found a long litany of cases involving robots. Upsettingly, many of those decisions suggest American judges have a fundamental lack of understanding of what robots are and how they function.

"Judges seem to aggressively understand a robot to be something without discretion, a machine that is programmed and does exactly what it's told," Calo told me. "Courts don't have their minds around the differences between people and robots."

Calo details this disconnect time and time again in the paper, "Robots in American Law." He and his research assistants combed through thousands of cases in which the word "robot" (or a related term like "artificial intelligence") was mentioned and selected nine cases that demonstrated particularly confounding or interesting decisions by the judge.

The question of what a robot is and what that means has come up more often than you might think. In Comptroller of the Treasury v. Family Entertainment Centers, a special appeals court had to decide whether Chuck E. Cheese animatronic robots were considered "performers." This distinction mattered because at the time, Maryland taxed food differently "where there is furnished a performance."



The court decided that "a pre-programmed robot can perform a menial task but, because a pre-programmed robot has no 'skill' and therefore leaves no room for spontaneous human flaw in an exhibition, it cannot 'perform' a piece of music … just as a wind-up toy does not perform for purposes of [the law] neither does a pre-programmed mechanical robot."

For the purposes of that individual case, that seems like a fairly innocuous definition. But Calo notes that even at the time of this case, robots had begun performing basic autonomous tasks. "If ever there were a line between human and robot spontaneity or skill, it is rapidly disappearing," he wrote.

This distinction matters, argues Calo, who has previously proposed a "Federal Robotics Commission." If rudimentary robots have already shown up thousands of times in court proceedings, then self-driving cars, drones, artificially intelligent bots, and more traditional robots are only going to become a more regular focus of court cases.

"Emergent [robotic] behavior has us headed to a place where courts will have to find that, while there is a victim, the law isn't able to find a perpetrator"

"As artificial intelligence becomes transformative tech, it will inevitably occasion legal disputes," Calo told me. "I wondered, are we writing on a clean slate? Knowing what case law is out there gives us the tools we need to know where judges are struggling with the law."


And they are struggling. We have no idea what will happen when the first autonomous car inevitably causes a fatal accident. Will Google be at fault? The human passenger of the self-driving car? The individual coder who introduced a bug into the car's software?

This is a hypothetical that has been raised time and time again. When it happens, perhaps the courts will look back at the 1949 Brouse v. United States decision, which ruled in favor of the estates of a man and a woman killed in a collision between a US Army fighter plane and a small private plane. The Army plane was being flown on autopilot, but the judge in the case ruled that the human pilot in the fighter plane still had an obligation "to keep a proper and constant lookout."

Will that legal precedent matter in a hypothetical case involving a Google self-driving car?

Calo says the whole debate comes back to this idea of discretion, which has nothing to do with the idea of the singularity or self-awareness. Time and time again, judges have looked at robots as entities that do only what they are programmed to do. But new "robots" avoid obstacles, make decisions, and generally do unpredictable things.

"Judges have always determined robots to have no discretion. If it ever was accurate, it isn't now. They don't make decisions now like you and I do, but they do have discretion, they have the ability to do things that are surprising and aren't intended by creator of robot," Calo said.

So people may die, "crimes" may be committed, but it's possible that the robot's creator isn't negligent and that he or she should not be held accountable. And even if you decapitate a robot, well, that's probably not an answer, either.

"Emergent [robotic] behavior has us headed to a place where courts will have to find that, while there is a victim, the law isn't able to find a perpetrator," he said. "But that assumes judges will understand how they work."