Elite Scientists Have Told the Pentagon That AI Won't Threaten Humanity
DARPA's 'Spot' quadruped prototype robot. Image: Sgt. Eric Keenan/U.S. Dept. of Defense


JASON advisory group says Elon Musk’s singularity warnings are unfounded, but a focus on AI for the Dept. of Defense is integral.

A new report authored by a group of independent US scientists advising the Dept. of Defense (DoD) on artificial intelligence (AI) claims that perceived existential threats to humanity posed by the technology, such as the popular image of drones as killer robots, are at best "uninformed".

Still, the scientists acknowledge that AI will be integral to most future DoD systems and platforms, even if AI that could act like a human "is at most a small part of AI's relevance to the DoD mission". Instead, a key application area of AI for the DoD is augmenting human performance.


Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, first reported by Steven Aftergood at the Federation of American Scientists, has been researched and written by scientists belonging to JASON, the historically secretive organization that counsels the US government on scientific matters.

Outlining the potential use cases of AI for the DoD, the JASON scientists make sure to point out that the growing public suspicion of AI is "not always based on fact", especially when it comes to military technologies. Highlighting SpaceX boss Elon Musk's opinion that AI "is our biggest existential threat" as an example of this, the report argues that these purported threats "do not align with the most rapidly advancing current research directions of AI as a field, but rather spring from dire predictions about one small area of research within AI, Artificial General Intelligence (AGI)".

AGI, as the report describes it, is the pursuit of developing machines capable of long-term decision making and intent, i.e. thinking and acting like a real human. "On account of this specific goal, AGI has high visibility, disproportionate to its size or present level of success," the researchers say.

Motherboard reached out to the MITRE Corporation, the non-profit organisation JASON's reports are run through, as well as Richard Potember, a scientist listed on the report, but neither responded to emails before this article was published. A spokesperson for the Defense Department told Motherboard in an email, "DoD relies on the technical insights provided by the JASONs to complement DoD's internal assessments as we set our strategic direction. All reports and recommendations are read and carefully considered in this context as we make investment decisions for research initiatives and future programs of record."


In an email on Thursday, Aftergood told Motherboard, "JASON reports are purely advisory. They do not set policy or determine DoD choices. On the other hand, they are highly valued, very informative and often influential. The reports are prepared only because DoD asks for them and is prepared to pay for them."

Aftergood said that JASON reports act as a "reality check" for Pentagon officials, helping them decide what's real and what's in the realm of possibility.

In recent years, the idea that artificial intelligence harbors malicious intent has flourished in the media, compounded by much more realistic fears that entire employment sectors will be replaced by robots. The issue is not helped by the conflation of robotics and AI by some media outlets and even politicians, as illustrated last week by a debate among members of the European Parliament on whether robots should be granted legal status as persons.

Highly publicized recent AI victories against humans, such as Google's AlphaGo win, don't represent any breakthrough in general machine cognition, the report argues. Instead, these wins rely on Deep Learning with Deep Neural Networks (DNNs): systems that can be trained to generate an appropriate output in response to a given input. Think of a dog sitting on command, rather than a dog deciding to sit on its own.
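To make that distinction concrete, here is a minimal, purely illustrative sketch of what "trained to generate an appropriate output in response to an input" can look like in code. It is not taken from the JASON report or from any DoD system: it uses only NumPy, and every detail (the XOR task, the network size, the learning rate) is an arbitrary choice for demonstration.

```python
# Illustrative only: a tiny neural network trained to map inputs to outputs,
# in the spirit of "an appropriate output in response to an input".
# The task, layer sizes, and learning rate are arbitrary demo choices.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs X and the outputs y we want the network to produce
# (here, the XOR function of two bits).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units with randomly initialised weights.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute the network's current output for every input.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge the weights to shrink the error on the examples.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0)

# The trained network now "sits on command": its outputs should be
# close to the target values [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

After training, the network reliably reproduces the example outputs, but nothing in it understands what the inputs mean; like the dog above, it has simply been conditioned to respond.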

"The two main approaches to AI are nothing like how humans must live and learn,"

Advertisement

Andrew Owen Martin, a senior technical analyst at the Tungsten Network, a collaborative team of math, AI, and computer science experts, and secretary of The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), agrees with the JASON report, arguing that the fears of existential threats held by the public and by other high-profile technologists are overblown.

"Any part of human experience that's at all interesting is too poorly defined to be described in either of the two main methods AI researchers have," he tells Motherboard. "The implication here is that the two main approaches to AI are nothing like how humans must live and learn, and hence there's no reason to assume they will ever achieve what human learning can."

Nevertheless, AGI is recognised by the JASON scientists as being somewhat pertinent to the DoD's future, but only if it were to make substantial progress. "That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here," reads the report. "Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground."

Northrop Grumman's X-47B uncrewed bomber is given as one example, and DARPA's ACTUV submarine hunter as another. Systems like these could no doubt be improved by enhanced artificial intelligence, but the scientists note that while they have some degree of autonomy, they are in "no sense a step…towards 'autonomy' in the sense of AGI". Instead, AI is used to augment human operators, for example by flying to pre-determined locations without a human needing to pilot the system.


Yet, while not categorically autonomous, AI-augmented weapon systems obviously remain points of contention for opponents of their use in military scenarios. Max Tegmark, cosmologist and co-founder of the Future of Life Institute, a think tank established to support research into safeguarding the future of human life, tells Motherboard that he agrees with JASON's view that existential threats are unlikely in the near term. However, Tegmark believes the imminent issue of autonomous weapons is "crucial".

"All responsible nations will be better off if an international treaty can prevent an arms race in lethal autonomous weapons, which would ultimately proliferate and empower terrorists and other unscrupulous non-state actors," he says.

So human-like autonomy is still a long way off, if not impossible, and the goalposts keep moving too, according to JASON. "The boundary between existing AI and hoped-for AGI keeps being shifted by AI successes, and will continue to be," say the scientists. Even military applications for technologies such as self-driving tanks may be at least a decade off. Discussing the progress of self-driving cars by civilian companies such as Google, the JASON scientists conclude, "going down this path will require at least a decade of challenging work. The work on self-driving cars has consumed substantial resources. After millions of hours of on-road experiments and training, performance is only now becoming acceptable in benign environments. Acceptability here refers to civilian standards of safety and trust; for military use the standards might be somewhat laxer, but the performance requirements would likely be tougher."



This article must conclude, however, with the looming caveat that the entire JASON report is based upon unclassified research. "The study looks at AI research at the '6.1' level (that is, unclassified basic research)," say the scientists. "We were not briefed on any DoD developmental efforts or programs of record. All briefings to JASON were unclassified, and were largely from the academic community."

Could America's military be working on top-secret artificial general intelligence programs years ahead of those known about in the public sphere? Probably not, reckons Martin. "AGI isn't around the corner, it's not even possible. I don't mean that it's 'too difficult' like 'man will never fly' or 'man will never land on the moon', I'm saying it's hopelessly misguided like 'man will never dig a tunnel to the moon'."


Update 01/23/17: This article has been updated to include a statement from the Department of Defense.