
It's Way Too Early to Be Afraid of a Superintelligent AI

Any AI worth fearing will have to be very different from what we’re building now.
Image: Ex Machina

Earlier this month, when Stephen Hawking was answering questions from the public on Reddit, he told a parable.

"The real risk with AI isn't malice but competence," the astrophysicist wrote. "A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.

"You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants.

"Let's not place humanity in the position of those ants."

Hawking has been a longtime advocate of caution in developing artificial intelligence. He's in good company: between Hawking, Bill Gates, and Elon Musk (not to mention Hollywood), ominous prognostications about artificial intelligence have become quite a trend. With all this high-profile attention, it starts to sound as though, if we don't act now, AI really might destroy us.

The doomsday narrative is not a totally crazy one. As we keep improving artificial intelligence, the thinking goes, we'll eventually invent a "general AI," a machine with the full reasoning capabilities of a human. Such a machine could reason about its own construction, just like its human creators. So presumably, it could keep redesigning itself to be smarter and better at becoming smarter, skyrocketing to unimaginable levels of intelligence.

It's a compelling story. In fact, I'll step out on a limb here: I think there's a decent chance machine intelligence will ultimately exceed human intelligence, and that we'll need techniques to contain it. And yet, as a graduate researcher in artificial intelligence, I throw my lot in with actual experts in the field: I believe the urgency of the message is profoundly misplaced.

The problem lies not with the long-term prediction, but the timeline. Some warnings explicitly assert that the danger is imminent: Elon Musk, for example, has claimed that AI capable of serious damage is five to ten years away. Others acknowledge that we may have longer, but argue that we won't see it coming: AI will jump from unimpressive to mind-boggling far faster than it went from pathetic to unimpressive.

Although both arguments seem plausible, each rests on the same two critical assumptions:

  • Progress to date represents significant advances toward general AI.

  • We're actively working on creating a general AI.

Fortunately for humanity, each of those assumptions is somewhere between exaggerated and false. Given the true state of AI research, we can rest easy for quite some time.

We've made progress, but down a different path
I spend my days coaxing computers to interpret human language. As research, my work is supposedly at the forefront of AI. Yet I recently spent a week trying (unsuccessfully) to get my system to stop failing miserably on sentences containing the word "and."

Of course, not every system suffers from this particular weakness. But the little-known reality of modern AI is that almost all of it is equally brittle. Change one assumption, and these systems are… well, not quite back to square one, but pretty close. As impressive as they are, they're shot through with highly specific shortcuts to enable them to perform one task extremely well. It's this sort of intelligence, or "narrow AI," that we've gotten good at.

Although narrow AI overlaps in function with general AI, the two are fundamentally different avenues of research. By definition, a narrow AI handles just one task: it learns to, say, classify images of vehicles, and that's it. A general AI, on the other hand, would need to share what it's learned between many tasks. If it's learned to tell Jeeps from golf carts, it may want to infer which could transport more shiny new processors.

While this would be trivial for a human, existing AI techniques rule out such generalization. You might try to cobble together a bunch of narrow AIs, but what each one learns is so task-specific that it's useless for other purposes.
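
To make "task-specific" concrete, here's a minimal sketch of a narrow classifier. This is purely my own illustration, not code from any real system: Python with NumPy and scikit-learn, with random numbers standing in for real image features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "image features": 200 vehicle photos, 64 features each (random here).
X_train = rng.normal(size=(200, 64))
# Stand-in labels: 0 = golf cart, 1 = Jeep.
y_train = rng.integers(0, 2, size=200)

# The narrow AI: one learned mapping, features -> {golf cart, Jeep}, and nothing else.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# It can answer exactly the question it was trained on...
new_photo = rng.normal(size=(1, 64))
print(clf.predict(new_photo))  # -> [0] or [1]

# ...and nothing else. "Which vehicle could carry more processors?" isn't a
# question it answers badly; it's a question it has no representation of at all.
```

The point isn't this toy model's accuracy; it's that nothing it learns has any meaning outside the single question it was built to answer.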

Researchers have recently started exploring techniques such as deep learning and transfer learning that might evolve into such cross-domain learning down the road. But this is not a path we're halfway down; it's one we're only just beginning to tread.
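
For the curious, here's roughly what transfer learning looks like in practice. Again, this is my own illustration, assuming the PyTorch and torchvision libraries and a made-up five-category task; it isn't drawn from any of the research above.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Start from a network pretrained on ImageNet (torchvision downloads the weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a fresh final layer for a new task, say five kinds of traffic sign.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer gets trained; everything learned on ImageNet is reused as-is.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```

Notice how modest the "transfer" is: from one image-labeling task to a closely related one. That's a long way from knowledge that moves between genuinely different domains.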

Furthermore, current AI systems are rigidly fixed in their functionality; they have no way of modifying their own programming to change what or how they learn. Reasoning about self-redesigning systems is so insanely complex that almost nobody has even touched this problem. If cross-domain AI is in its infancy, AI that can improve its own programming is a twinkle in the eyes of parents who just met.

Nobody's even trying
Of course, you could imagine that with intensive research, those problems could be solved within a decade or two. But that's the second problem with the warnings: nobody's really trying to create a general AI. (There are tiny pockets of researchers and entrepreneurs who claim to be, but they're a fringe of the AI community whose work is rarely considered impressive.) The success of narrow AI has been spectacular, and there's still so much low-hanging fruit that the incentives to work on harder, broader problems are scant.

Creating intelligence is hard. It's not something we'll spontaneously achieve by accidentally zapping a robot, or something a rogue genius will conceive in a secretive mansion estate. It's going to take decades of dedicated work by thousands of researchers. If we're not working on general AI, or if all we've got are a few scrappy startup employees, it's not going to happen.

What all this means is that a) at best, we're at the starting line for general AI, not on the verge of completing the course; and b) we have basically no idea what a general AI would look like. That makes it both unnecessary and virtually impossible to prepare for. As AI titan Andrew Ng argues, trying to mitigate the risk now would be like "work[ing] on combating overpopulation on…Mars."

So let's keep our expectations realistic. Sure, we can invest a little effort into considering possible futures, but the doomsday issue is far from urgent. Once the prerequisites for general AI become a serious and fruitful line of research, then maybe we'll have a pressing need. But now is not that time.