
Elon Musk Calling Artificial Intelligence a 'Demon' Could Actually Hurt Research

Ultimately, Musk’s comments are hype, and hype is dangerous when it comes to science.

Elon Musk drags the future into the present. He's disrupted space with his scrappy rocket startup SpaceX and played a key role in making electric vehicles cool with Tesla Motors. Because of this, when Musk talks about the future, people listen. That's what makes his latest comments on artificial intelligence so concerning.

Musk has a growing track record of using trumped-up rhetoric to illustrate where he thinks artificial intelligence research is heading. Most recently, he described current artificial intelligence research as "summoning the demon," and called the malicious HAL 9000 of 2001: A Space Odyssey fame a "puppy dog" compared to the AIs of the future. Previously, he's explained his involvement in AI firm DeepMind as being driven by his desire to keep an eye on a possible Terminator situation developing. This kind of talk does more harm than good, especially when it comes from someone as widely idolised as Musk.


Ultimately, Musk's comments are hype, and hype, even when negative, is toxic when it comes to research. As Gary Marcus noted in a particularly sharp New Yorker essay last year, cycles of intense public interest, rampant speculation, and the subsequent abandonment of research priorities have plagued artificial intelligence research for decades. The phenomenon is known as "AI winter": recurring periods when funding for AI research has dried up after researchers couldn't deliver on the promises that the media, and researchers themselves, had made.

As Daniel Crevier described in his 1993 history of AI research, perhaps the most infamous example of an AI winter came during the 1970s, when DARPA pulled funding from many of its projects aimed at developing intelligent machines after they failed to produce the results the agency expected.

Yann LeCun, the head of Facebook's AI lab, summed it up in a Google+ post back in 2013: "Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped." What would happen to the field if we couldn't actually build a fully functional self-driving car within five years, as Musk has promised? Forget the Terminator. We have to be measured in how we talk about AI.

Image: Steven Bowler/Wikimedia Commons

Musk's fiery comments serve to mislead people less familiar with AI research than (we must assume) he is. The fact is, our "smartest" AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robotic hand to pick up a ball, or a robot to run around without falling over, not putting the finishing touches on SkyNet.


Multimillion-dollar initiatives like the EU's Human Brain Project and the US's BRAIN Initiative are actively seeking to digitally map the human brain and use that knowledge for better computation, it's true. But they're starting small, and their most tangible results are sure to be the technologies they develop along the way, like nanoscale biosensors that can better scan the (organic) brain. Of course, none of this means that the research being done isn't valuable—it absolutely is. Drumming up public fear isn't.

"When we hear these things about AI, it feels to me like what they're fearing is more like aliens, more like science fiction. When the aliens show up and they're three times smarter than we are," Selmer Bringsjord, director of the AI & Reasoning Laboratory at the Rensselaer Polytechnic Institute in New York, told me. "The scenarios to worry about are being increasingly studied by AI researchers, under sponsorship from governments. That's not the fear that he's pointing to. This is a deeper, and much more irrational fear."

What Musk and his like-minded contemporaries, who include Stephen Hawking, fear is the "existential risk" posed by artificial intelligence. The term, popularized by Nick Bostrom and his colleagues at the Future of Humanity Institute, or FHI (Musk is a known fan of Bostrom's recent book, Superintelligence), refers to any risk that threatens to stymie the human species' development, or wipe us out altogether.


An existential risk could be war, rampant wealth inequality, natural disaster, or a malicious AI, according to Daniel Dewey, a scientist at the FHI. A salient fact that Musk doesn't seem to understand, or at least doesn't present, is that assessing the existential risk posed by artificial intelligence, unlike those others, is a largely speculative and philosophical undertaking. It deals in plausibility, not certainty or assured trajectories, however valuable such thinking may prove to be in the far future. And even then, only if what the FHI researchers call an "intelligence explosion," in which machines suddenly and unexpectedly become intelligent, ever occurs.

"I think it's worth emphasizing how much we don't yet know about the AI risk situation," Dewey told me in an email. "The risk assessments we can do now for AI are much more like plausibility arguments than like rigorous calculations—for example, we can't say with certainty that an intelligence explosion is possible, though we can argue that it seems plausible, and we can't predict when an intelligence explosion could begin."

Musk, like Hawking, believes that regulatory committees should begin to oversee AI development, lest we do something "very foolish," like develop an AI that can kill us all. One must wonder what, exactly, an AI commission geared towards public policy would do at the moment. Certainly, regulation for the development of robots that display limited intelligence, as University of Washington School of Law professor Ryan Calo has proposed, could be beneficial. Self-driving cars, killer drones, and robots on the battlefield—those, we have loads of. Strong AI anywhere near completion, on the other hand? That's another matter entirely.


"The dangers are acute, but that's a fact that can be distilled to scenarios in which, in narrower contexts, the AI makes the wrong decision," Bringsjord said. In other words, concerning oneself more practical and immediate risks like the recently publicised ethical quandary regarding who a self-driving car should hit when presented with the unavoidable choice is more realistic and responsible.

"I think the problem is one of more math and engineering than it is one of public policy," Bringsjord continued. "If we do set up a committee, we need to be focusing on the engineering."

According to Bringsjord, a focus on good engineering should come before public policy that tries to regulate the potential social ills of artificial intelligence. Not just because the field is still in its early days, but also because the negative outcomes of computer programs can be mitigated by a rigorous design process.

One way to do this would be to revitalize the field of formal software verification, he said. Formal verification mathematically vets a program for faults and potential errors, reducing the risk of malfunction to near zero. A casualty of past funding cuts, the field has all but disappeared, save for a crowdsourcing effort by DARPA.
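To make that idea concrete, here is a minimal sketch of what formal verification looks like in practice, using the Z3 theorem prover's Python bindings (a tool choice assumed for illustration; the article doesn't name any particular system). Instead of testing a handful of inputs, the solver checks a claim about every possible input and produces a counterexample if the claim is false.

```python
# A minimal formal-verification sketch (assumes the z3-solver package: pip install z3-solver).
# We prove that a branchless absolute-value trick matches its specification
# for every possible 32-bit input, rather than just the inputs we thought to test.
from z3 import BitVec, If, prove

x = BitVec("x", 32)                 # a symbolic 32-bit integer standing in for all inputs at once

mask = x >> 31                      # arithmetic shift: all ones if x is negative, else zero
branchless_abs = (x ^ mask) - mask  # the low-level implementation under scrutiny

spec_abs = If(x < 0, -x, x)         # the mathematical specification of absolute value

# Ask the solver to prove implementation == specification for every 32-bit value.
# It prints "proved" if no counterexample exists, or a concrete failing input otherwise.
prove(branchless_abs == spec_abs)
```

A real verification effort targets far larger programs and richer specifications, but the principle is the same: properties are checked exhaustively by a solver rather than sampled by tests.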

"Instead of oversight committees and such, we might want to actually pay the people who can do the needed engineering to do it," Bringsjord told me. "It's like flooding in New Orleans. As is well-known, the engineers explained in detail how to protect New Orleans from floods—but the plan was deemed too expensive. Imprudent penny pinching is the real threat, not AI intrinsically."


Image: Steve Jurvetson/Wikimedia Commons

But the inherent potential dangers of extremely advanced AI are exactly what Musk and his contemporaries seem to be fixating on. The risk of technological annihilation is a dark sort of promise that both trumps up what AI will actually be able to do for the foreseeable future and obfuscates the issues that surround AI today.

Take, for example, technological unemployment. The development of AI and robots in the pocket of capital is poised to revolutionize our world and displace innumerable workers without any futuristic advancements. The potential for military robots without strong AI to autonomously kill people is becoming a contentious issue as drone technology widens its scope and becomes more advanced, and the organizations calling for an end to their development are being summarily ignored.

The truth is that we don't need a technological deus ex machina in the form of an AI as intelligent as a human for our world to be irrevocably changed by robots and software, for better or for worse. This doesn't mean that we shouldn't consider the futuristic implications of advanced AI, but we should do it responsibly, especially when given access to public platforms.

Musk's comments fuel the media fire of hype and unexamined fear that serves to oversell a technology that is still in its infancy, however impressive its near-daily advancements seem. Self-driving cars, artificial national security screeners, robotic coworkers… The possibilities are terrifying, but they are also wonderful. That's the future for you, and it never helps to enter it in hysterics.