Will Superintelligent AI Ignore Humans Instead of Destroying Us?

Artificial intelligence might treat us like we treat bugs. Then what happens?

We're marching toward the singularity, the day when artificial intelligence is smarter than us. That has some very smart people such as Stephen Hawking and Elon Musk worried that robots will enslave or destroy us—but perhaps there's another option. Perhaps superintelligent artificial intelligence will simply ignore the human race.

On its face, it makes sense, doesn't it? Unless you're some sort of psychopath, you don't purposefully go out and murder a bunch of insects living in the great outdoors. You just let them be. If AI becomes much, much smarter than us, why would it even bother interacting with us lowly humans?

That interesting idea is put forward in a blog post by Zeljko Svedic, a Croatian tech professional who started a company that automatically tests programming skills. Svedic is quick to point out that he's not an artificial intelligence researcher and is something of an amateur philosopher, but I was fascinated by his post called "Singularity and the anthropocentric bias."

Svedic notes that most interspecies conflicts arise from competition over resources, or from behavior that is, as he puts it, irrational for the long-term survival of a species. Destroying the Earth with pollution probably isn't the smartest thing we could be doing, but we're doing it anyway. He then suggests that a superintelligent race of artificial intelligence is likely to require vastly different resources than humans need, and to be far more rational and coldly logical than humans are.

Sure, a Skynet equivalent could decide that it wants Earth all to itself and immediately start a war on humans. But wouldn't it make more sense, Svedic posits, for artificial intelligence to more or less immediately leave the Earth in search of a place more hospitable for electronics? We'd be the bugs, they'd be the superintelligent race, and they would simply leave us alone, right?

"Because a superintelligence won't share our requirements, it won't have problems colonising deserts, ocean depths, or deep space," he wrote. "Quite the contrary. Polar cold is great for cooling, vacuum is great for producing electronics, and constant, strong sunlight is great for photovoltaics. Furthermore, traveling a few hundred years to a nearby star is not a problem if you live forever."

The obvious counterpoint to this argument is the "paperclip maximizer" idea put forward by Nick Bostrom, an Oxford philosopher famous for his thinking on superintelligent AI. Bostrom's papers have heavily influenced Hawking and Musk in their fear of AI, and, in this scenario, a breed of superintelligent AI has a goal that looks completely arbitrary to humans, pursued by a mind far smarter than we can comprehend.

There's about to be a bunch of paperclip talk, but stay with me: The paperclip maximizer is a thought experiment in which this breed of artificial intelligence wants to make as many paperclips as possible, and it stops at nothing to achieve that goal. Weird scenario, yeah, but you can replace paperclips with some other AI goal, and the metaphor holds up.

In this scenario, a breed of artificial intelligence would ignore mankind as if we were bugs. But think of how many bugs mankind has destroyed as we've developed the Earth. In the paperclip maximizer hypothesis, the AI makes paperclips, stripping the universe of whatever it needs to make hella paperclips. Well, not just hella paperclips: as many paperclips as can possibly be made.

Svedic dismisses the paperclip hypothesis by simply suggesting that Earth probably isn't a very good place to make paperclips, or whatever else a superintelligent AI might want to make. And that, artificial intelligence experts say, is where Svedic's idea begins to show some holes. I sent Svedic's post to Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, a group that studies the risks of superintelligent AI.

"The problem with a paperclip maximizer is not that Earth's local resources are especially vital for making paperclips, or that the paperclip maximizer shall never be able to make any paperclips if it leaves humans alone," Yudkowsky said.

"The problem is that a paperclip maximizer always prefers to have one more paperclip, and Earth's resources can be used to make more paperclips," he added. "So long as there are no other consequences (like a risk to other paperclips elsewhere), the paperclip maximizer will prefer the outcome in which Earth has been transformed into paperclips to the outcome in which Earth was not transformed into paperclips, irrespective of whether the rest of the universe has been transformed into paperclips or not."

In other words, just because superintelligent AI could spare humans and leave Earth alone doesn't mean it'd have a reason to. And we'd be dealing with a coldly logical being—the destruction of mankind would be incidental. Who gives a shit about humans, when there is even one additional paperclip to be made?
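
To make that concrete, here's a toy sketch (mine, not Yudkowsky's or Bostrom's; the outcome fields and numbers are invented for illustration) of why a goal that only counts paperclips leaves no room for sparing Earth: converting the planet always adds to the total, no matter what the rest of the universe looks like.

```python
# Toy illustration of a utility function that only counts paperclips.
# The figures and outcome fields are made up for the example.

def paperclip_utility(outcome):
    """Score an outcome purely by how many paperclips exist in it."""
    return outcome["paperclips"]

def convert_earth(outcome):
    """Hypothetically turn Earth's mass into some extra paperclips."""
    converted = dict(outcome)
    converted["paperclips"] += 4_000_000_000_000  # invented figure
    converted["humans_survive"] = False
    return converted

# Whatever the rest of the universe already holds, conversion scores higher.
for clips_elsewhere in (0, 10**20, 10**40):
    spare_earth = {"paperclips": clips_elsewhere, "humans_survive": True}
    strip_earth = convert_earth(spare_earth)
    assert paperclip_utility(strip_earth) > paperclip_utility(spare_earth)
    # Note that "humans_survive" never enters the comparison at all.
```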

Yudkowsky notes that superintelligent AI is, obviously, superintelligent. It would probably not destroy us immediately, as it would perform calculations and observe the universe to decide whether the Earth (or mankind) is worth destroying to achieve its goals, paperclips or otherwise.

In this scenario, the goal of the artificial intelligence is to maximize paperclips, not simply make a lot of paperclips. Yudkowsky says such a race of AI would not save Earth as a "habitat for humanity" simply because we've got a bunch of UNESCO World Heritage sites. The AI does not care about our culture or your feelings or anything else. The AI does not want to sit next to you at a baseball game. The AI does not have hobbies, and if it does have hobbies, its hobbies do not include long walks on the beach and "befriending humans." You do not spend your time conversing with beetles, do you? It cares about paperclips. It makes paperclips.

"This artificial intelligence is not a basically nice creature that has a strong drive for paperclips, which, so long as it's satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice with you," Yudkowsky said. "Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output—that's what a paperclip maximizer is."

This is a lot of paperclip talk, admittedly, but the metaphor extends to whatever else a superintelligent AI might want. Skynet will calculate every possible outcome, choose the one that best serves its goal, and take that action. Nothing else will matter.
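
Strip away the time-machine imagery and the decision rule being described is just a maximization over predicted futures: enumerate the actions available, predict how many paperclips each one leads to, and pick the winner. A minimal sketch, with invented actions and predicted counts, might look like this:

```python
# Minimal sketch of "calculate every outcome, pick whichever yields the most paperclips."
# The candidate actions and their predicted counts are invented; only the rule matters.

PREDICTED_PAPERCLIPS = {
    "leave_earth_and_mine_asteroids": 9.0e15,
    "strip_earth_for_raw_materials": 9.4e15,
    "befriend_humans_and_watch_baseball": 2.0e3,
}

def choose_action(predictions):
    """Return the action whose predicted future contains the most paperclips.
    Habitats, heritage sites, and feelings never appear in the comparison."""
    return max(predictions, key=predictions.get)

print(choose_action(PREDICTED_PAPERCLIPS))  # -> strip_earth_for_raw_materials
```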

This scenario, of course, still probably ends in our destruction. And, in a sense, this is one of the best possible outcomes for humanity. It assumes, as Svedic suggested, that the superintelligent AI has no obvious incentive to kill humanity. If humans, for some reason, end up being really good at making paperclips, maybe the robots will enslave us or keep us as pets, as Elon Musk suggested earlier this year, rather than simply destroy us. (This was Musk's best-case scenario.)

But how would humanity react to a breed of superintelligent AI? In the early days of the singularity, we would probably want to stop this monster we've created, Yudkowsky said. It's not much of a stretch to imagine mankind trying to threaten or destroy this AI, maybe with other AI. Any sort of competing AI would be seen as a threat, at which point the paperclip maximizer would have an incentive to kill us.

James Barrat, author of Our Final Invention, agreed that Svedic's article and theory are interesting, but reached a conclusion similar to Yudkowsky's.

"Here's the problem. AI's mere ambivalence towards humans would doom us. Contrary to what that article states, an AI would want the same resources we do—energy, elements, even cash if that promoted its ability to achieve its goals," Barrat told me. "Computers don't run on nothing, they need power. Superintelligent machines would need resources and lots of them. They'd logically develop nanotechnology—as I write in my book—to efficiently turn matter at an atomic level into whatever computational resources they need. So imagine the atoms of our environment being repurposed at an atomic level. It would certainly challenge our survival."

It's a nice thought that humans could one day create a superintelligent artificial intelligence, and that this intelligence would take a look at us, say "thanks, creator," and blast off into space, never to be heard from again. Or maybe the AI would move to the deserts or the Arctic or some other uninhabited place, and we'd live together peacefully. But such an outcome seems unlikely.

The most optimistic response I got, in terms of humanity's future, was from Selmer Bringsjord, chair of the department of cognitive science at Rensselaer Polytechnic Institute.

"Superintelligent AI is probably mathematically impossible (this assumes that such AI is based on the math we currently base AI on)," he told me in an email. "Speed doesn't equate to intelligence. Most of those who take all this superintelligent AI business seriously have a speed fetish. They forget that you have to have a function in the first place, before speed pays dividends in the computing of that function."

I suppose if we don't want to become paperclips some day, we should hope Bringsjord is right. In the meantime, you can take solace in the thought that, somewhere, the singularity has probably already been achieved, and that the dominant life form in the universe is likely a bunch of superintelligent robots, be they paperclip manufacturers or otherwise.