

Elon Musk Hopes These Researchers Can Save Us from Superintelligent AI

37 research teams are splitting $7.2 million to research how to keep machines friendly.

So you know how some noted uber-billionaires and academics have said time and time again that people should take a greater interest in making sure robots don't literally turn us into pets? Turns out some top researchers are finally looking into stopping that from happening.

Elon Musk, who regularly warns of apocalyptic outcomes from uncontrollable artificial intelligence, pledged in January to donate $10 million to the Future of Life Institute, a group dedicated to making sure artificial intelligence does what it's supposed to. On Wednesday, researchers are finally getting a piece of that money.


The institute granted $7.2 million to 37 research teams across the globe, with the bulk of it coming from Musk's fund and an additional $1.2 million coming from the Open Philanthropy Project. The grants, which range from $20,000 to $1.5 million, will be used to build AI safety constraints, develop a standard ethics system for AI development, and "align superintelligence with human interests," among dozens of other projects.

"They could do us great harm if they are not well designed"

If you're wondering what some of these projects entail, here are a few descriptions:

"Our team plans to combine methods from computer science, philosophy, and psychology in order to construct an AI system that is capable of making plausible moral judgments and decisions in realistic scenarios. We hope that this work will provide a basis that leads to future highly-advanced AI systems acting ethically and thereby being more robust and beneficial."

"There is a growing concern over the deployment of autonomous weapons systems, and how the partnering of artificial intelligence (AI) and weapons will change the future of conflict. Bringing together computer scientists, roboticists, ethicists, lawyers and diplomats, the project will produce a conceptual framework that can shape new research and international policy for the future. Moreover, it will create a freely downloadable dataset on existing and emerging semi-autonomous weapons. Through this data, we can gain clarity on how and where autonomous functions are already deployed and on how such functions are kept under human control.

"'I don't know' is a safe and appropriate answer that people provide to many posed questions. To appropriately act in a variety of complex tasks, our artificial intelligence systems should incorporate similar levels of uncertainty. We propose a more pessimistic approach based on the question: 'What is the worst-case possible for predictive data that still matches with previous experiences (observations)?' We propose to analyze the theoretical benefits of this approach and demonstrate its applied benefits on prediction tasks."


The goal, in short, is to put checks in place so that machines don't end up harming humans.

"The danger with the Terminator scenario isn't that it will happen, but that it distracts from the real issues posed by future AI", FLI president Max Tegmark said in a statement. "We're staying focused, and the 37 teams supported by today's grants should help solve such real issues."

After receiving Musk's initial outlay, the institute issued an open letter calling for engineers and creators to build more "robust and beneficial artificial intelligence." The remaining funds will be awarded to promising projects.

"I think most people are afraid of AIs that will turn against us or try to take over the world," Peter Asaro, a New School researcher who received a grant for $116,974, told me in an email.

"The reality is that they will be largely indifferent to us. But that also means they could do us great harm if they are not well designed," he added. Asaro's project will delve into the legal challenges that come with developing autonomous artificial intelligence.

Stuart Russell, a computer science professor at the University of California, Berkeley, echoed Asaro's sentiments. His project, titled "Value Alignment and Moral Metareasoning," will attempt to give machines a sort of human value system to help them make decisions.

"The actual problem is a high degree of competence ('superintelligence') coupled with objectives that are not perfectly aligned with those of humans—[perhaps] because they have been mis-specified by the machine's designers," he told me in an email.

Asaro thinks that Musk's futuristic interests underscore a growing fear that we'll just throw all checks and balances out the window in pursuit of exponential progress.

"From where he has invested his time, energy and money—in electric cars, solar energy, space exploration—[Musk] sees the transformative power of technological innovation," he said. "But that power could serve many interests, or fail to benefit humanity in general, or actually harm particular people."

Whatever the case, let's agree on one thing: If any of these projects keep rogue machines from destroying the human race, that $7.2 million will have been well spent.