The SpaceX and Tesla founder continues to fear the singularity.
It's no secret that modern-day renaissance man Elon Musk fears what could happen if superintelligent robots self-actualize—he once said that we're "summoning the demon" with AI research—but on Sunday, Musk detailed exactly how he thinks artificial intelligence could end humanity.
Musk, the founder of Tesla and SpaceX, went on StarTalk, Neil deGrasse Tyson's weekly podcast, to talk about all things future, and AI was a main topic of conversation.
"I'm quite worried about artificial super intelligence these days. I think it's something that's maybe more dangerous than nuclear weapons," Musk said. "We should be really careful about that. If there was a digital super intelligence that was created that could go into rapid, recursive self improvement in a non logarithmic way, that could reprogram itself to be smarter and iterate really quickly and do that 24 hours a day on millions of computers, then that's all she wrote."
If artificial intelligence is going to kill us, that's generally how it's been predicted in sci-fi movies and by people who ponder such things: The intelligence will self-actualize, teach itself how to become much, much smarter than us, and then become uncontrollable (work on superintelligence is ongoing). Musk said that we have to consider why, exactly, we're trying to make superintelligent machines in the first place.
If it's for human companionship and to make us happier (as detailed in Her, which you should watch if you haven't), well, maybe that won't end so well.
"The utility function of the digital super intelligence is of stupendous importance. What does it try to optimize? We need to be really careful with saying, 'how about human happiness?'" Musk said. "It can conclude that an unhappy human should be terminated. Or that we should all be captured and [constantly] injected with dopamine and serotonin to optimize happiness. I'm just saying we should exercise caution."
Tyson asked Musk whether he thought superintelligent machines would domesticate us. "We'll be like a pet Labrador if we're lucky," Musk replied.
Of course, Musk wasn't just spitballing. This is something he's thought about for a long time, and it's one of his more controversial talking points. He's even spent $10 million trying to keep ultra-intelligent AI from becoming dangerous.
Tyson and his cohost for the episode, Bill Nye, are no slouches when it comes to thinking about the implications of technology, and both of them thought Musk's theories were a little out there.
"Twenty percent of the world's population does not have electricity. They've never made a phone call," Nye said. "I think people have to keep in mind—computers are so reliable—but somebody is literally or in a sense shoveling the coal. What happens if you unplug the supercomputer or intelligence?"
Tyson said that guys like Musk are worried a superintelligent machine would prevent you from unplugging it, but Nye asked how a computer would "create that thing to keep you from doing that? It seems like a solvable problem."
Likewise, artificial intelligence researchers have said that Musk's comments have a chilling effect on research, and that we are far, far away from ever having to worry about being overrun by robots.
But the idea of creating software that goes haywire and becomes uncontrollable isn't just a nightmare scenario—it's already happened. Consider that Stuxnet, a computer virus believed to have been created by the NSA to physically damage Iranian uranium enrichment facilities, has since spread far across the internet and has reportedly infected Russian nuclear plants.
Stuxnet is a highly targeted piece of software that doesn't do much on ordinary computers besides replicate and spread itself, but it's easy to imagine a superintelligence spreading itself across connected devices around the world. At that point, maybe the only answer is to pull the plug on computers we depend on. And then where are we?