

Artificial Intelligence Might Not Kill Us After All

So long as it does what we want it to.

Renowned theoretical physicist and AI alarmist Stephen Hawking is among signatories on a new open letter that sets out a desired future for artificial intelligence research.

The letter, written by the Future of Life Institute—a volunteer organisation that proclaims its mission "to mitigate existential risks facing humanity"—is signed by several hundred people, among them academics from around the world, AI researchers from companies including Google, a few names from Singularity University, and SpaceX CEO Elon Musk, who has previously warned of impending robotic doom.

But while earlier messages have focused on the more apocalyptic potential of intelligent machines—in a previous article for the Huffington Post, Hawking and some of the other signatories warned that creating AI might be the last thing humanity did—this letter and its attached research paper have a more positive tone. They focus on where AI research could be most beneficial. Provided we keep it under control.

"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," the authors state. The point of the research priorities they lay out is to ensure AI systems head in this positive direction. As they put it, "our AI systems must do what we want them to do."

Short-term priorities backed by the researchers include maximising economic benefits by looking at the impact of automation on the job market; exploring ethical and legal issues such as liability, privacy, and the use of autonomous weapons; and an emphasis on making systems that are fundamentally safe to begin with.

Our AI systems must do what we want them to do

The safety issue takes centre stage in the team's longer-term objectives. They write: "A frequently discussed long-term goal of some AI researchers is to develop systems that can learn from experience with human-like breadth and surpass human performance in most cognitive tasks, thereby having a major impact on society."

If there's any "non-negligible" chance that we could get to this point, they argue, then more research is needed to make sure that AI stays "robust and beneficial." If something's cleverer than us, we want it on our side.

The paper broadly outlines questions that need to be considered right from the design phase of intelligent systems: building the right systems the right way, with the right security and control built in.

That last one—control—is the kind of concern that leads to the end-of-the-world rhetoric we've grown used to hearing on the issue of AI. What if it turns against us? The more autonomous a machine, the more plausible this scenario seems, which is why the authors suggest research is needed into systems that won't slip out of our command.

It's a (slightly) more optimistic approach to the AI future that nevertheless remains realistic; sure, this technology carries risks, but if we start looking at how to address them now, maybe we'll be OK.

If we can harness AI to tackle issues like poverty and disease, maybe we'll even be better than OK.