
Thousands of Scientists Say We Need a Global Ban on Autonomous Weapons

"Warfare is a bloody, horrific, painful thing and it should always be so."

Autonomous weapons are the future's Kalashnikovs, according to over 1,000 experts in artificial intelligence. Cheap, lethal, and guaranteed to end up in the wrong hands at some point, AI weapons are poised to be at the centre of the next global arms race.

That's according to an open letter from the Future of Life Institute, an organisation dedicated to mitigating existential risks. It's endorsed by thousands, including such household names (and outspoken prophets of AI doom) as Stephen Hawking and Elon Musk.

"People have argued about autonomous weapons for years," said Max Tegmark, an MIT professor and one of the FLI's founders. "This is the AI experts who are building the technology who are speaking up and saying they don't want anything to do with this."

He likened the situation to physicists, biologists, and chemists speaking out against research in their fields being used to develop nuclear, biological, and chemical weapons.

Toby Walsh, a professor of AI at the University of New South Wales in Australia, who will present the letter at the International Joint Conference on Artificial Intelligence in Buenos Aires on Tuesday, said it was time AI researchers made their stance clear.

"There have been some negotiations at the United Nations in Geneva looking towards some sort of ban on autonomous weapons," he said. "In conversation with those people, it became to clear to us that it would help the discussions and diplomatic negotiations if they saw that there was general support from scientists and not just humanitarian organisations."

"Warfare is a bloody, horrific, painful thing and it should always be so."

The letter defines autonomous weapons as those that "select and engage targets without human intervention," citing as an example "armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria." It doesn't include military drones in current use, as a human still has to remotely "pull the trigger."

Walsh and his many co-signatories are urging authorities to stop an "arms race" in AI weapons before it really starts. "It has been suggested that this potentially will be as big a transformation as the invention of gunpowder and the invention of nuclear weapons to the way we fight war," Walsh said.

Tegmark explained that autonomous weapons could be a game-changer. "It opens up entirely new possibilities for things that you can do—where you can go into battle or do a terrorist attack with zero risk to yourself, and you can also do it anonymously, because if some drones show up and start killing people somewhere you have no idea who sent them," he said.

While some argue that autonomous weapons could help reduce casualties by reducing the need for human soldiers in combat, the researchers warn that this could also "lower the threshold" for starting conflict.

"One of the main factors that limits wars today is that people have skin in the game," Tegmark told me. "Politicians don't want to see body bags coming home, and even a lot of terrorists don't want to get killed."

The letter also warns that autonomous weapons would be particularly good at "tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group." Tegmark painted a dystopian picture of politicians being assassinated by anonymous drones, or dictators using autonomous weapons to wipe out a minority when even their own troops don't have the heart to.

"On a personal level, I think that anything that makes warfare seem cleaner is probably a bad idea," added Walsh. "Warfare is a bloody, horrific, painful thing and it should always be so."

"Boko Haram is way too incompetent to figure out how to build it themselves; so is ISIS."

Another major worry is that the tech could fall into the wrong hands, both because the price is expected to drop (getting hold of a nuclear missile is far less easy) and because, like most military tech, it could be captured on the battlefield or sold on the black market.

"If any big military power goes ahead with this, then they're going to become the Kalashnikov of tomorrow, because everybody's going to have them," said Tegmark, noting that trying to stop autonomous weapons after they're in use would be as difficult as trying to stop the trade in guns, so action needs to be taken to stop them coming into play.

"Boko Haram is way too incompetent to figure out how to build it themselves; so is ISIS," he said. "So there is time to stop it if there's a political will."

Actually banning the technology behind these kinds of weapons would obviously be futile; Walsh noted that the code written for self-driving cars could be tweaked to guide autonomous weapons. But the idea is that a ban specifically on using such technology to build autonomous weapons would prevent arms companies from putting them on the market to be misused.

The researchers compare this strategy to the ban on blinding lasers. The UN's Protocol on Blinding Laser Weapons came into force in 1998 and, on humanitarian grounds, prohibits the use of laser weapons designed primarily to cause permanent blindness.

Naturally, lasers capable of blinding exist, and have legitimate uses, but as a result of the protocol, arms companies can't sell weapons based on that tech. The UN met in April this year to discuss what it calls LAWS (Lethal Autonomous Weapons Systems); the UK opposes an international ban.

The Future of Life Institute has previously warned of the risk artificial intelligence could pose to our own existence, setting out guidelines for how AI research should progress and emphasising the need for AI to stay under our control. Autonomous weapons, said Tegmark, were a "very flagrant example of how technology might be used in really really stupid ways that are really really bad for humanity."

The AI community says there is no time to lose in preventing that imagined future from becoming reality.

"We're looking in less than a decade," said Walsh. "Now is the time to put a stop to this before it's too late."