
Superintelligent AI Could Wipe Out Humanity, If We're Not Ready for It

As Stephen Hawking said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Impending technological change tends to elicit a Janus-faced reaction in people: part awe, part creeping sense of anxiety and terror. During the Industrial Revolution, Henry Ford called it “the terror of the machine.” Today, it's the looming advance of artificial intelligence—the promise of programs with superhuman intelligence, the infamous singularity—that is starting to weigh on the public consciousness, as blockbuster ’netsploitation flick Transcendence illustrates.

There’s a danger that sci-fi pulp like Transcendence is watering down the real risks of artificial intelligence in public discourse. But those threats are being taken very seriously by researchers studying the existential threat AI poses to the human race.

Dismissing hyper-intelligent machines as mere science fiction “would be a mistake, and potentially our worst mistake ever,” wrote Stephen Hawking in a recent Huffington Post article co-written with several other leading physicists. They argue that we need to seriously research the existential risks and ethical concerns surrounding the future of AI, because the survival of the human species could depend on it.

That's right, the very existence of the human race—that’s how high the stakes are here. I talked to Daniel Dewey, who studies the ethics of machine super-intelligence at Oxford University’s Future of Humanity Institute (FHI), to get some perspective on what these specific risks could be.

At issue is the development of what Dewey and other researchers at FHI call “superintelligent AI.” In the institute's foundational paper, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” program director Nick Bostrom defined superintelligence as a “transcending upload”—a mind that's been transferred into a powerful computer. (He notes that it's also possible to achieve human-level AI without uploading the brain, but that would take much longer.)

It’s the standard scenario for what Ray Kurzweil and other transhumanists have dubbed the singularity—an idea still written off by some tech figures, like Microsoft co-founder Paul Allen, as speculative schlock, but that futurists like those at FHI take very seriously. They're tasked with researching the potential ramifications of the AI revolution so that we're prepared to deal with the fallout of an unpredictable “intelligence explosion,” and hopefully avoid “extinction by side-effect.”

Yet Dewey is an optimist first and foremost. When I asked him what an existential risk is, he expounded on humanity’s potential for a bright future. Humans have the potential to exist as long as the universe does, and to expand our civilization into its vastness, he said. “This means that there is much more potential for goodness—happy and fulfilled lives, meaningful relationships, art, fun—in the long-term future than in the short-term.”

An "existential" risk, as defined by Bostrom in a 2002 research paper published in the Journal of Evolution and Technology, is a risk that threatens a "significant chunk of that long-term value." The FHI is just one of a number of organizations currently preparing for humanity’s potential death-by-AI, including the Cambridge Centre for Existential Risk, the Machine Intelligence Research Institute, and the Future of Life Institute.

“Success in creating AI would be the biggest event in human history,” wrote Hawking. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

Those risks include extreme wealth inequality, a major bioengineering accident, or a potentially species-ending disaster.

“Artificial intelligence, in the forms that it already exists, has had a noticeable effect on society, and there are reasons to expect it to have enormous effects in the future,” said Dewey. “It could play a role in human extinction, persistent inequality, or other existential risks … If we understand the risks better, we can make better decisions to avoid them or mitigate the damage.”

One of the biggest concerns is “accidental misuse”—if core flaws in a program’s design accumulate, amplify, and ultimately become catastrophic. The base functions of AI, explained Dewey, are accumulating knowledge and resources, self-preservation, and self-improvement. Continually enacting these base functions is what would allow an AI machine to be adaptable enough to complete all other tasks, and thus truly intelligent.

Whatever specific task an intelligent machine is set to do, it’s reasonable to expect it could accomplish it better by working to protect itself from real-world threats. Hence Dewey’s invocation of “extinction by side-effect”: A super-intelligent AI may not try to drive humanity to extinction simply because it's smarter and better fit to rule, but because it is merely completing its basic functions.

“A super-intelligent AI, if it turns its power to gathering resources or protecting itself, would have an immense impact on the world,” he said. “It could co-opt our existing infrastructure, or could invent techniques and technologies we don't yet know how to make, like general-purpose nanotechnology. It could eventually take over resources that we depend on to stay alive, or it could consider us enough of a danger to its task completion that it decides the best course is to remove us from the picture. Either one of those scenarios could result in human extinction.”

Despite his firm belief that humanity should prepare itself for the worst, Dewey said he’s aware that he and his contemporaries could be wrong. “Of course, this could all turn out not to be true for some reason that we do not yet know, but given its potential impact, the fact that intelligence explosion seems reasonably possible gives us a very good reason to do further research on it.”

But most people still dismiss that dystopian future-world as the stuff of science fiction, including notable public intellectuals like Noam Chomsky. Critics claim that science is, in reality, nowhere near achieving the technological advances required for super-intelligent machines. And even if it were, as University of Amsterdam media professor José van Dijck argued in her paper “Memory Matters in the Digital Age,” the human brain is not analogous to a computer, so a computer could never be analogous to the human brain.

Others worry that pop sci-fi depictions of the technology, like Transcendence, can oversimplify a potentially doomsday scenario. “Much better, I think, to actually study the thing and see what we can find out about it scientifically, than to do literary analysis,” said Dewey.

Even if the singularity never arrives, experts say artificial intelligence could still have massive destabilizing effects on society—if we’re not ready for it.

“It will almost certainly have huge economic implications,” said Dewey. “Such machines could replace the vast majority of the human labour force. It could also cause massive political upheaval, as countries compete to benefit from the massive technological windfall and global structural changes. If we're not prepared for this type of transition, economic and political tensions could lead to huge inequalities, displacement, or war.”

Here, Dewey’s concerns don’t sound like the naïve sci-fi futurism that detractors make his research out to be. AI's nascent effects are already being felt. Robots with limited artificial intelligence are currently replacing humans in workplaces around the globe. If the trend continues without proper analysis, oversight, and forward thinking, we could soon be looking at a world upturned by unchecked technological progress.

Unfortunately, it’s very difficult at the moment to prescribe solutions or precautionary measures for the development of artificial intelligence, because, quite simply, we don’t know enough yet. Hence Stephen Hawking’s recent op-ed calling for more research, like FHI’s, into the ramifications of AI for society.

“As a public good, societies and their governments should support and invest in research on intelligence explosion and super-intelligent AI,” Dewey echoed. “When we understand better how intelligence explosion could come about and how super-intelligent AI could be managed, we will be able to take meaningful precautions against existential risks from AI.”

Hey, the future of humanity could depend on it.