
The Divide Between People Who Hate and Love Artificial Intelligence Is Not Real

An interview with Max Tegmark, author of "Life 3.0: Being Human in the Age of Artificial Intelligence."

Philosophers, computer scientists, and even Elon Musk are concerned artificial intelligence could one day destroy the world. Some scholars argue it's the most pressing existential risk humanity might ever face, while others mostly dismiss the hypothesized danger as unfounded doom-mongering.

It might seem like there are two competing schools of thought: the AI pessimists and the AI optimists. But this dichotomy is misleading. The people who are the most worried about AI also believe that if we managed to create a "friendly" superintelligence—meaning that its goal system was aligned with ours—then it could be incredibly positive for our world. To paraphrase Stephen Hawking, if the creation of superintelligence isn't the worst thing to happen to humanity, it will likely be the very best.


In the new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT cosmologist and co-founder of the Future of Life Institute Max Tegmark offers a sweeping exploration of a wide range of issues related to artificial intelligence—from how AI could affect the job market and how we fight wars, to whether a super-smart computer program could threaten our collective survival. I talked to Tegmark about the threats and possibilities of AI over Skype.

The following interview has been condensed and edited for clarity.

Motherboard: Many leading experts are worried about the possible negative consequences of AI. Where do you fall along the spectrum of anxiety?

Tegmark: I'm optimistic that we can create an inspiring future with AI if we win the race between the growing power of AI and the growing wisdom with which we manage it, but that's going to require planning and work, and won't happen automatically. So I'm not naively optimistic the way I am about the Sun rising tomorrow regardless of what we do, but I'm optimistic that if we do the right things, the outcome will be great. To me the most interesting question is not to quibble about whether things are more likely to turn out good or bad, but to ask what concrete steps we can take now to maximize the chances of a good outcome.

Can you give me some examples of such concrete steps?

First, fund AI safety research; second, try to negotiate an international treaty limiting lethal autonomous weapons; and third, ensure that the great wealth created by automation makes everybody better off. With respect to AI safety research, how do we transform today's buggy and hackable computers into robust AI systems that we can really trust? When your laptop crashes, it's annoying, but when the AI controlling your future self-flying airplane or nuclear arsenal crashes, "annoying" isn't the word you'd use!


As we go farther into the future it's going to be really crucial to figure out how to make machines understand our goals, adopt our goals, and retain our goals as they get smarter. If you tell your future self-driving car to take you to the airport as fast as possible and you get there covered in vomit and chased by helicopters and you say, "No! That's not what I asked for," and it replies [here Tegmark does a robot voice] "That's exactly what you asked for," then you've understood how hard it is to make machines understand your goals.

And third, just as my kids are much less excited about Legos now than when they were tiny, we want to make sure that intelligent machines retain their goals as they get smarter, so they don't get similarly bored with the goal of being nice to humanity.

With respect to governments, Hillary Clinton writes in her new book that she became convinced that AI poses a number of risks that cannot be ignored, but that she couldn't figure out how to talk about AI in a way that didn't make her sound crazy. What are your thoughts about how governments are dealing with AI risks?

Yeah, I thought the whole topic of AI was conspicuous by its absence in the last presidential election, even though it's obviously the most important issue for jobs. When we try to align the goals of AI with our goals, whose goals are we talking about? Your goals? My goals? ISIS's goals? Hillary Clinton's or Donald Trump's goals? This is not something we can leave only to computer geeks like myself, because it affects everybody. This is obviously something where philosophers, ethicists, and everybody out there needs to join the conversation.


There are also a lot of questions where we really need economists, psychologists, sociologists, and others to join the challenge of making sure that everybody's better off thanks to this technology. Part of that involves figuring out how to make sure that the people whose jobs are automated don't get ever poorer, as has been happening to some extent already; a lot of economists claim that the inequality that gave us Trump and Brexit has been largely driven by automation. But it's not enough just to give people money, because you can't buy happiness. Jobs give us not just income but a sense of purpose and social connections. And it's a fascinating conversation about what sort of society we ultimately want to create, so that we can use all this technology for good.

Speaking of Trump, are you worried right now about nuclear conflict? I found his improvised "fire and fury" comments perhaps the scariest thing I've ever heard from a US government official.

Yes, I think that nuclear weapons and AI are actually two examples of exactly the same issue: We humans are gradually inventing ever more powerful technologies and struggling more and more to make sure that the wisdom with which we manage them keeps up with their growing power. When we invented less powerful technology like fire, we screwed up a bunch of times and then invented the fire extinguisher. But with really powerful technology like nuclear weapons and superhuman AI, we don't want to learn from mistakes anymore.


Now, some people misconstrue this as Luddite scaremongering; to me, it's simply safety engineering. The reason NASA managed to put astronauts safely on the moon is that they systematically thought through everything that could go wrong and figured out how to avoid it. That's what we have to do with nuclear weapons, and that's what we have to do with very powerful AI as well. When I look at how today's world leaders are handling 14,000 hydrogen bombs, I feel a lot like we've given a box of hand grenades to a kindergarten class.

This is why I like your description of our predicament being a race between wisdom and technology.

To a large extent, wisdom consists of answers to tough questions. So I think it's absolutely crucial that we get people thinking about these tough questions now, so we have answers by the time we need them. Some of the questions are so hard that it might take 30 years to answer them, which means that if there's some possibility we might have superintelligence in 30 years, we should start researching them now, not the night before someone switches on a superintelligence.

If humanity gets its act together and wisdom wins the race, how good could the future be?

I wanted to write an optimistic book precisely because I feel there's been too little emphasis on the upsides—if people just focus on the downsides, they get paralyzed by fear and society gets polarized and fractured. If people instead get excited about a wonderful future, it can foster collaboration, which we sorely need in this day and age. Everything I love about civilization is the product of intelligence, so if we can amplify our intelligence with AI, we have the potential to solve all of the problems that are plaguing us right now, from incurable diseases to how to get sustainable energy and fix our climate, poverty, justice—you name it.

And as a physicist at heart, I also can't resist the bigger picture: Here we are on our little planet, quibbling about the next election cycle, when we actually have the potential for life to flourish for billions of years here on Earth and throughout the cosmos, with hundreds of billions of solar systems and hundreds of billions of galaxies. The upside is mind-boggling. It's way beyond what science fiction writers used to fantasize about.

And to people who say, "Oh no, this is scary. Let's stop technology," my answer is that this is not only unrealistic, it's really just a recipe for suicide, because if we don't improve our technology, the question isn't if we're going to go extinct, but merely whether we'll get wiped out by an asteroid strike, a supervolcano, or something else.
