Even AI Scientists Can't Agree on AI Risks

Will robots kill us all one day? A debate on Reddit.
Image: Flickr/Ash Carter

Nobody knows whether the robots will one day kill us all, perhaps least of all the scientists who are actually developing artificial intelligence. In a Reddit AMA, or "ask me anything," session today, three computer scientists gave two completely different answers to the question.

The three scientists, based at labs in the US, France, and Brazil, definitely have the cred to talk about the issue. Recently, they developed a technique to overcome the problem of "catastrophic forgetting" in AIs: the tendency of artificial neural networks (self-learning computer simulations that model neurons in the human brain) to forget something they just learned when they try to learn something new. So, they know their shit.
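
To see what that failure looks like, here is a minimal, hypothetical sketch in Python/NumPy; it is a toy illustration of catastrophic forgetting, not the researchers' actual method or tasks. A tiny network learns one made-up task, then trains on a second, and its accuracy on the first typically collapses toward chance:

```python
# Toy demonstration of catastrophic forgetting (illustrative only; the
# tasks and network here are invented, not from the AMA researchers' work).
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis, n=2000):
    # Task A (axis=0) labels points by the sign of x0; task B (axis=1) by x1.
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, 2 -> 16 -> 1, trained with full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2).ravel()

def train(X, y, epochs=300, lr=0.5):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, p = forward(X)
        g = (p - y)[:, None] / len(X)       # cross-entropy gradient wrt logits
        dW2 = h.T @ g
        db2 = g.sum(axis=0)
        dh = (g @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

def accuracy(X, y):
    _, p = forward(X)
    return ((p > 0.5) == (y > 0.5)).mean()

XA, yA = make_task(axis=0)
XB, yB = make_task(axis=1)

train(XA, yA)
print(f"after task A: accuracy on A = {accuracy(XA, yA):.2f}")

# Training on task B with no access to task A's data overwrites the shared
# weights, so accuracy on A typically falls back toward chance (about 0.5).
train(XB, yB)
print(f"after task B: accuracy on A = {accuracy(XA, yA):.2f}, "
      f"accuracy on B = {accuracy(XB, yB):.2f}")
```

Nothing in this toy network preserves the first task's solution, so the second round of training simply overwrites the shared weights; that overwriting is the failure mode techniques like the researchers' are designed to mitigate.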

Reddit user "Bashos_Sword" asked the scientists how they would "combat" the fears, currently being stoked by popular tech and science luminaries like SpaceX founder Elon Musk and Stephen Hawking, that AI will end the human race. The question drew a mixed response.

"Human brains are very complex machines, but, whether it takes years or centuries, we will eventually figure out how to create machines as smart and likely smarter," wrote University of Wyoming researcher Jeff Clune, with agreement from Brazil-based scientist Kai Olav Ellefsen. "Once we do, it is an open question whether those machines will live symbiotically with us or try to wrest control from us. If they do wrest control, it is anyone's guess what they would try to do to us."

Basically, Clune wrote, it's only a matter of time before a machine can outsmart us. Even if that's a century from now (DARPA, by the way, wants to develop software that can evolve and survive for at least that long), we have to take steps to "keep the genie in the bottle."

But Jean-Baptiste Mouret, the France-based researcher of the trio, disagreed with his colleagues. According to him, just because a computer can sort of play poker doesn't mean it's going to convince you that you live in a simulation of its own design any time soon. The difference is that between a single, narrow task and the ability to do many tasks and "think" in general terms.

"First, I think we are far from any 'general artificial intelligence' and the successes of AI on some specific tasks (like face recognition, logics, etc.) does not say anything about how close we are from a general AI," Mouret wrote. "Second, I am much more frightened by all the autonomy we give to dumb machines: when an autonomous machine can take the decision to kill someone, I would prefer it to have a smart AI than a very dumb one."

Here, Mouret takes the perspective of the down-in-the-dirt computer scientist trying to make a simple computer program that can "learn" more than one skill at a time, instead of the more philosophical approach favoured by AI apocalypse seers like Nick Bostrom, who penned the now-foundational text, "Superintelligence: Paths, Dangers, Strategies."

So there appears to be significant disagreement among scientists, even ones who work together on AI, as to the risks posed by intelligent machines. Hell, some of these folks can't even agree on what a robot is.