


Before Anyone Bans Killer Robots, We Have to Define What They Are

Activists want to ban “killer robots,” but first we need to define them.
Image: US Navy

In April, world leaders will meet in Geneva to discuss the regulation of autonomous weapons at the Convention on Certain Conventional Weapons (CCW). But before they make laws, they first have to agree on what an autonomous weapon even is.

"We still really haven't come to a universal understanding of what an autonomous weapon is," Paul Scharre, Director of the 20YY Warfare Initiative at the Center for a New American Security (CNAS), told me. "We have a group of NGOs that have leapt to conclusions––they want a ban on 'killer robots,' but nobody is sure what that is."


Scharre and Michael C. Horowitz co-authored a new report for CNAS that seeks to clear up some questions surrounding the definition of these so-called "killer robots."

The report, which Scharre said was published with the upcoming CCW in mind, breaks down the concepts behind these machines, including the idea of autonomy itself, what an autonomous weapon is, and how autonomy is already being used militarily today.

"There is a wide gap between a Roomba and a Terminator."

"In its simplest form, autonomy is the ability of a machine to perform a task without human input," the report says. "Autonomous systems are not limited to robots or uninhabited vehicles. In fact, autonomous, or automated, functions are included on equipment that people use every day."

Although we rely on many benign autonomous technologies every day, including anti-lock brakes and airbags in cars, the degree of complexity behind the technology makes a huge difference. As the report says, "there is a wide gap between a Roomba and a Terminator."

"The threshold of when a weapon crosses into the territory of an autonomous weapon is worthy of consideration," Scharre said. "The issue isn't autonomy at large. You might get in a self-driving car that is avoiding obstacles by itself, but presumably the person is still choosing the destination."

At least 30 countries now have semiautonomous defense systems, meaning the machine performs a function based on commands a person inputs. The technical term for this is having a "human in the loop." "Fully autonomous" or "human out of the loop" machines can perform tasks without human intervention, and that is what concerns some activists.
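To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from the CNAS report or any real weapon system; the names (Target, engage, human_authorizes) and the confidence threshold are hypothetical. The only difference between the two modes is who makes the final engagement decision.

```python
# Illustrative sketch only: a toy model of "human in the loop"
# vs. "human out of the loop." All names and thresholds here are
# hypothetical, not drawn from the CNAS report or any real system.

from dataclasses import dataclass


@dataclass
class Target:
    identifier: str
    confidence: float  # sensor confidence that this is a valid target


def human_authorizes(target: Target) -> bool:
    """Stand-in for a human operator's decision."""
    answer = input(f"Engage {target.identifier}? [y/N] ")
    return answer.strip().lower() == "y"


def engage(target: Target, human_in_the_loop: bool) -> bool:
    """A semiautonomous system waits for the operator's approval on
    each engagement; a fully autonomous one acts on its own criteria
    once activated."""
    if human_in_the_loop:
        return human_authorizes(target)  # human in the loop
    return target.confidence > 0.9       # human out of the loop


targets = [Target("radar-contact-1", 0.95), Target("radar-contact-2", 0.60)]
for t in targets:
    if engage(t, human_in_the_loop=True):
        print(f"Engaging {t.identifier}")
```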


Noel Sharkey (middle) with Steve Goose of Human Rights Watch and Jody Williams of the Nobel Women's Initiative. Image: Campaign to Stop Killer Robots

Noel Sharkey, a professor of artificial intelligence and robotics at Sheffield University who campaigns for control of armed robots, said his main concern is with the idea of an "autonomous kill function."

"We would all agree that an autonomous weapon is one that can select and attack targets without human intervention," he wrote in an email. "My biggest concern is about harm to civilians."

However, there are very few existing fully autonomous machines, which makes the technology difficult to regulate.

"It's important to understand the technology we are talking about, but it's challenging because the technology doesn't yet exist," Scharre said. "For example, with chemical weapons, they were already used, and we retrospectively tried to regulate them. With autonomous weapons, we arent sure yet what the technology is--there's nothing you can point to."

Similarly, Mary Wareham, director of the Arms Division of Human Rights Watch and coordinator of the Campaign to Stop Killer Robots, said it is difficult to define exactly which technologies the group opposes because governments haven't yet laid out what is on the table.

"We've really got to hear from the governments as to how they view this, as they are the decision-makers––perhaps that will start to happen at the meeting in April," she said. "Governments are going to have to draw a line somewhere at some point and figure out what falls on the wrong side of that line."

NGOs and world leaders will continue to parse the technicalities, but in the meantime, dystopian visions of intelligent killing machines are hard to keep out of the larger discourse.

"I think there is this sort of paradigm we have inherited, in part from science fiction, of technological creations that turn against us," Scharre said. "The better way to think of it is in terms of what tasks are being done by a machine and what tasks by a person."

Although it's difficult to regulate technologies that don't yet exist, the report offers a framework to start the conversation. With autonomy already integrated into military technology, autonomous killing machines seem inevitable. In April, perhaps we'll see how world leaders handle the legal, moral, ethical, and policy issues that come with them.

Top image: A Navy JSOW C-1 networked munition during testing in 2011. The munition has an autonomous targeting feature, but does that make it an autonomous weapon?