
Killer Robots Could Be More Humane than Humans

Roboethicists are meeting at the UN this week to debate the fate of autonomous killing machines.
Image: International Governance of Autonomous Military Robots

Today at the United Nations in Geneva, dozens of nations from around the world and the top minds in robotics are trying to figure out what we should do about the specter of killer robots.

In diplomatic parlance, that's "lethal autonomous weapons systems," or LAWS—intelligent machines that can decide on their own to take a person's life without any human intervention. As militaries in the US, China, Russia, and Israel creep ever closer to developing these killing machines, the UN is holding its first convention to debate whether we should ban the technology outright before it's too late.


The downside of creating machines that can kill us without our say-so is well-trodden territory; sci-fi stories have done a sufficient job depicting that particular dystopian future. So have activists from the Campaign to Stop Killer Robots, Human Rights Watch, and the International Committee for Robot Arms Control, who are currently in Geneva lobbying for a preemptive ban.

But the other side of the debate is more provocative, and as the global community teeters at the tipping point of a fully roboticized future of warfare, it's worth considering how killer robots could be a good thing.

Ronald Arkin, a roboethicist at the Georgia Institute of Technology, is busy arguing this point at the UN this week. He believes that LAWS could save the lives of innocent noncombatants.

For one, killer bots won't be hindered by trying not to die, and will have all kinds of superhero-esque capabilities we can program into them. But the more salient point is that lethal robots could actually be more "humane" than humans in combat because of the distinctly human quality the mechanical warfighters lack: emotions.

Without judgment clouded by fear, rage, revenge, and the horrors of war that toy with the human psyche, an intelligent machine could avoid emotion-driven error, and limit the atrocities humans have committed in wartime over and over through history, Arkin argues.

"I believe that simply being human is the weakest point in the kill chain, i.e., our biology works against us,” Arkin wrote in a paper titled "Lethal Autonomous Systems and the Plight of the Non-combatant."


Arkin isn't a warmonger. The paper, which lays out much of the argument he'll be making at the UN this week, is couched in qualifiers and a disclaimer that the whole argument rests on the assumption that war is, unfortunately, inevitable, and it strikes a tone that seems to say, 'I realize there's a distinct possibility this will be a disaster, but just hear me out.'

"As robots are already faster, stronger, and in certain cases … smarter than humans," he wrote, “is it really that difficult to believe they will be able to ultimately treat us more humanely in the battlefield than we do each other, given the persistent existence of atrocious behaviors by a significant subset of human warfighters?"

In other words, if we have to fight each other, maybe technology, if developed carefully, can help us kill just the enemy and not innocent people. Another report co-authored by Arkin suggests that future killer robots be governed by a series of rules: international treaties and a kind of ethical software to ensure compliance with human rights law.

Of course, even with these safeguards, a lot of people disagree. The big, gaping hole in Arkin's argument is this: how can you guarantee these autonomous killing machines will actually do what we say? How do you know they won't reach a point of consciousness and intelligence where they decide to protect themselves even at the expense of human life? Why should we trust them to obey our laws?


Or for that matter, how can we guarantee autonomous systems’ targeted strikes will go off without a hitch? What if they malfunction, or get hacked? At the last count, US drone strikes, which were also pitched as a way to minimize innocent casualties, had killed nearly 1,000 innocent civilians.

"I think it is quite likely that these systems will behave in unpredictable ways, especially as they grow increasingly complex and are used in increasing complicated environments,” Peter Asaro, cofounder of the International Committee for Robot Arms Control emailed me from Geneva. “It is also increasingly likely that they will fail, breakdown and malfunction in less predictable ways as they become more complex."

There are a host of other concerns, such as the ethical implication of removing humans from the killing process, a move that makes it easy to kill a huge number of people without giving it a second thought. The UN meeting is part of the Convention on Certain Conventional Weapons—the group responsible for squashing blinding lasers, landmines, and other horrible weapons deemed inhumane.

No matter what the UN decides, the fact remains that lethal semi-autonomous robots are already fighting alongside humans, and militaries are working hard to develop full autonomy. Prototypes for next-gen UAVs like the X-47B and Taranis are designed to drop bombs, and can do so without the human supervision that controls Predator and Reaper drones.

For better or worse, autonomous lethal machines may be the inevitable future.