We Need to 'Chill' About Artificial Intelligence, Says Founder of Robot Company


But it will take your job.
The Baxter robot. Image: AEMC Summit/Wikipedia

Artificial intelligence is something of a lightning rod in the media. On one hand, tech and science luminaries like Elon Musk and Stephen Hawking have warned the public about future artificial agents that will wipe out humanity. On the other, we have Rodney Brooks, founder of Rethink Robotics—the company developing Baxter, a robot whose only function is putting people out of work—who thinks we should all just "chill."


In a blog post published Monday on Rethink Robotics' site, Brooks lays out an excellent case—or, as I like to call it, an "internet suplex"—against fear-mongering about a potential robot intelligent enough to A) recognize us as human and understand what that means on the level of cognition, and B) want to destroy us.

His central point aligns with my own argument that Elon Musk is almost certainly wrong about AI: essentially, we're nowhere close to creating an intelligent machine. That's partly because the technology isn't there yet, and partly because we don't really know how the brain works at the level of specific processes, and so can't model it. Here's an excerpt from Brooks' post:

I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. […] While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn't help very much at all in "knowing" what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine "intent", or any overarching goals or "wants". […] Malevolent AI would need all these capabilities, and then some.

The great irony in Brooks' statement, however, is that his company is developing a kind of limited artificial intelligence that could soon irrevocably restructure the workforce, likely leaving many of those affected without jobs. And research suggests that those affected will most likely be poor people stuck in jobs that already underpay them, a reality addressed by robo-Marxist theory. We have plenty to worry about.

What's an interested party to do when very smart people are saying completely different things about artificial intelligence? Musk and his robots-will-kill-us-all crew are predicting a cyber apocalypse in the not-so-distant future, while Brooks says we have absolutely nothing to worry about. Both extremes are most likely wrong, though Brooks, with his measured approach, is perhaps closer to the mark. But, you know, still wrong.

We need to hit a discursive middle ground when publicly talking about AI. That means having measured expectations for the future of the technology, while rigorously thinking through and confronting the terrifying possibilities that today's AI will bring in the coming years.

Right now, we have killer drones, an increasingly mechanized workforce, and an economy running at the speed of fiber optics. These technologies are only going to become better and more pervasive, and we must consider their potentially destructive effects. In the end, both Musk's and Brooks' comments are dodges of a similar kind, downplaying the very real and immediate threats we face from AI, nearly all of which are politically or economically motivated.