
The Icelandic Institute for Intelligent Machines Now Has a Unique Ethics Policy

The lab wants to prevent Skynet, basically, and pressure other researchers to do the same.

The Icelandic Institute for Intelligent Machines has a first-of-its-kind ethics policy: it is refusing to work on artificial intelligence, automation, or machine learning for military purposes, instead pursuing its research from a purely pacifist standpoint.

"(T)he increased possibility — and in many cases clearly documented efforts — of governments wielding advanced technologies to spy on their law-abiding citizens, in numerous ways, and sidestep long-accepted public policy intended to protect private lives from public exposure has gradually become too acceptable," the new policy reads. "In the coming years and decades artificial intelligence (AI) technologies — and powerful automation in general — has the potential to make these matters significantly worse."


Iceland has no standing army, which makes the country a natural home for this kind of pacifist research. The policy is modelled on Japan's post-war approach, which banned military technology research (a ban that was recently lifted).

As such, the new ethics policy bans research into any AI programs that may cause physical or psychological anguish, violate privacy or human rights, be used for any illegal purpose (including warrantless wiretaps), or be used in an act of war.

Much of the policy seems to implicate larger nations like the United States, specifically calling out the NSA's wiretapping as a "pervasive breach of the U.S. constitution."

Of course, there are plenty of other uses for AI, machine learning, and automation in military and intelligence communities: object recognition for drone strikes, vocal pattern recognition in wiretaps, other kinds of tracking, and smarter, scarier drones and weapons in general. Think: a killing machine built with a specific target or targets in mind. And in recent years DARPA has taken a heavy interest in robotics, pushing military robots ever closer to reality.

IIIM's policy goes a step further. Not only will the lab refuse to perform research in those arenas, it also won't accept money from those actors, including government money tied in any way to the military, or private clients whose work primarily focuses on military or weapons research and development. That covers any group that draws more than 15 percent of its research budget from military sources or military work.

Whether others will follow its lead remains to be seen; IIIM says it's the first lab to develop such a policy. But hopefully it will help steer others toward non-invasive AI systems that benefit the greater good rather than aid violence, surveillance, or other coercive activities.