Meet the Man Behind the Push to Ban Killer Robots

"Lethal autonomous robots," as they're known, are on their way whether we're ready for them or not.
Image via Wikipedia

Depending on who you ask, armed robots that can discern by themselves when and how to stage attacks, without guidance from humans, present either an unprecedented danger to humanity or its greatest mechanism of defense. But both sides agree that such "lethal autonomous robots," as they're known, are on their way whether we're ready for them or not.

The prospect of free-thinking war machines waging the ultimate battle against the human race has been bouncing around most of our minds since Arnold promised us that he'd be back in the 1980s. That was back when the idea of a walking, talking robot soldier was, like Schwarzenegger himself, more caricature than credible threat.

Less than three decades and more than a few drone strikes later, a new UN report is calling for "national moratoria" on developing killer robots in every country on the globe. It's a good first step, considering no one really knows what an accurate portrait of drone development looks like around the world. We think that between 60 and 70 countries use surveillance drones. The US is the largest drone manufacturer. China plans to start selling them on the global market. We're hoping for the best, but we're not planning for the worst.

To bring the backdrop of killer robots, and the threat they pose, into clearer focus, I talked to Peter Asaro, co-founder of the International Committee for Robot Arms Control. The author of the UN report, Christof Heyns, drew heavily on Asaro's research and his organization's agenda in crafting his conclusions. In creating machines that are better than humans, Asaro says, we're enabling a new kind of danger that we aren't prepared to handle.

MOTHERBOARD: Why is it important that we acknowledge this report, and these issues, right now?

Asaro: I think we face these issues immediately, today. It's the best time for regulation. The sooner we talk about it, the better. Once these systems are developed and nations become dependent on them, it will be very hard to move backward.

There are a number of systems outlined in the Human Rights Watch report—they call them "precursor systems"—which are autonomous systems that are in use already. The Samsung Techwin system that patrols the demilitarized zone between North and South Korea has fully autonomous capability that is switched off. But of course you could switch it on. The technology is there at that level. Then there are the developments we see with the X-47B or Taranis, combat drones capable of autonomous flight and carrying weapons. The question is, who is controlling the weapons? Those technologies are in testing and development. We're really only a few years away from a prototype and large-scale development.

Given the picture you just painted, what perspective would you like world leaders to acknowledge as the world comes to terms with autonomous killer robots?

We think the national moratoria are good steps toward an international ban. We want a worldwide ban so that there will always be a human meaningfully supervising targeting and kill decisions. What we really want is a discussion about regulation. Ultimately, we need a treaty that establishes a norm that it is unacceptable to delegate the authority to kill to a machine.

What aspect of autonomous weapons is going to present the greatest challenge, regulation-wise?

Primarily, we're concerned with deployment. But research and development are much more difficult to regulate and ban, especially in terms of dual-use technologies. For example, the X-47B performs autonomous launch and landing on aircraft carriers. That could be a stepping stone to an autonomous fighter, but it could also be used for aircraft flown by human pilots. It's hard to say that automated take-off and landing, themselves, are bad or should be prohibited, because the technology can be used in different ways.

We want a worldwide ban so that there will always be a human meaningfully supervising targeting and kill decisions.

You think about navigation or being able to identify humans in a crowd—how will those be applied? We want to ban the coupling of those capabilities within a weapons system, the act of connecting them and authorizing a system to use lethal force without further human supervision. Of course, there is a challenge in how we define "meaningful human supervision" such that it can be implemented in a treaty, but this is precisely why expert discussions and international negotiations are needed.

Where exactly do you draw the line between what is and is not acceptable within a system that will continue to be outfitted with autonomous features?

There's a very clear line to be drawn when it comes to using lethal force. It's fine to automate transportation, surveillance and other military tasks. But when you talk about using or releasing a weapon, you really want a human who has situational awareness, who is aware of context, and who is able to discern that the target is valid before the weapon is used. Otherwise you're giving free rein for automation to decide for itself what is a target.

The problem is not just whether it is precise or accurate. Maybe the technology can progress in terms of its accuracy and precision, but the deeper question is, can it assess the value of a target? Is it really a threat? Is the use of lethal force really necessary? How has its military value changed in light of the unfolding battle? Is the value of that military target high enough that we're willing to risk civilian lives? How many? It requires a sophisticated strategic understanding of what's going on—risk assessments and other judgments that computers aren't equipped to make right now. It's not easy to write an algorithm to do that, and it may be impossible.

Also, there's a lack of accountability and responsibility for whoever told the robot to go do something. The reality is that if the robot does something really bad, you can't hold it accountable in criminal court. You lose the deterrent effect and any identifiable accountability, and you have killing going on that's unaccountable. That's a huge problem.

What implications does this have for conventional notions of human accountability? What sort of moral and philosophical quandaries do killer robots present?

There's a question of whether they can even conform to laws. And for us, is having no accountability in and of itself acceptable? That's a moral question—whether it's permissible at all to delegate the authority to a machine to kill a human being.

Generally, humans killing humans is acceptable when it's self-defense. But in that case it's a human estimation, and it's always a judgment call.

It's difficult when even uniformed armed combatants engaged in warfare are not always legitimate enemy targets. There's a moral quality: If it's not necessary to kill your enemy in a given situation, then it's morally wrong, even if it's legal. Is it ever legitimate for a machine to decide who lives and dies? I think the answer is no, no matter how sophisticated the computer is.

Are there any individuals or interests who are pushing back against your proposed regulations?

There's an argument that there might be some AI system down the line that actually makes targeting and kill decisions better than humans. And if there were, then those systems might cause fewer civilian casualties in war, and if so, then we might be morally obligated to use them. They argue that banning development now would effectively prevent these potentially better machines from ever existing.

But similar arguments were made for chemical and biological weapons: that they could be more humane than bullets and bombs. After getting a few glimpses of chemical warfare, the world community agreed that it is repugnant, and now believes that those weapons are wrong in and of themselves.