The Real Threat Is Machine Incompetence, Not Intelligence

Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.

The past couple of years have been a real cringe-y time to be an AI researcher. Just imagine a whole bunch of famous technologists and top-serious science authorities all suddenly taking aim at your field of research as a clear and present threat to the very survival of the species. All you want to do is predict appropriate emoji use based on textual analyses and here's Elon Musk saying this thing he doesn't really seem to know much about is the actual apocalypse.

It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage to the famous person whose job description largely centers on making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is shitty AI. Incompetent, bumbling machines.

Bundy notes that almost all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence.

"The singularity is predicated on a linear model of intelligence, rather like IQ, on which each animal species has its place, and along which AI is gradually advancing," Bundy writes. "Intelligence is not like this. As Aaron Sloman, for instance, has successfully argued, intelligence must be modeled using a multidimensional space, with many different kinds of intelligence and with AI progressing in many different directions."

The problem is that the public doesn't really get this because no one bothers to explain it. We have general intelligence and so we see a simulacrum of intelligence on TV and assume that it too involves something like general intelligence, even though a Go-playing computer is more or less doomed to an existence as a Go-playing computer. In Bundy's words: "Many humans tend to ascribe too much intelligence to narrowly focused AI systems."

So, yes: machine incompetence. Bundy has unusual standing on this point because he was one of a number of UK computer scientists who argued in the early 1980s against Ronald Reagan and Edward Teller's Strategic Defense Initiative, initially proposed as a closed-loop AI system (read: necessarily human-free) that would theoretically detect outbound Soviet ICBMs and blast them to pieces with lasers before they could re-enter Earth's atmosphere and end the world.

SDI seemed like a great idea to policy-makers, but what they didn't understand was that the AI just wasn't there. False positives in prior early-warning missile detection systems were common and had been triggered by, among other things, flocks of birds and the moonrise. A false positive could mean nuclear war.
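A rough, back-of-the-envelope sketch of why that matters: even a detector with a tiny chance of misreading any single observation becomes nearly certain to raise a false alarm if it runs continuously for long enough. All of the numbers below are invented for illustration; they aren't from Bundy's article or from any real early-warning system.

```python
# Invented numbers, for illustration only.
per_scan_false_alarm = 1e-6          # assumed odds one scan misreads birds,
                                     # moonrise, etc. as an incoming launch
scans_per_year = 60 * 60 * 24 * 365  # assume one scan per second, year-round

# Probability of at least one false alarm over a year of continuous operation
p_false_alarm_year = 1 - (1 - per_scan_false_alarm) ** scans_per_year
print(f"{p_false_alarm_year:.1%}")   # ~100%: a spurious detection is all but certain

# Whether that near-certain false alarm is harmless depends entirely on
# what sits downstream of it.
```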

"Fortunately, in these systems a human was in the loop to abort any unwarranted retaliation to the falsely suspected attack," Bundy writes. "A group of us from Edinburgh met U.K. Ministry of Defence scientists, engaged with SDI, who admitted they shared our analysis. The SDI was subsequently quietly dropped by morphing it into a saner program. This is an excellent example of non-computer scientists overestimating the abilities of dumb machines."

Research scientists "need to publicize these lessons to ensure they are widely understood," Bundy adds.

He goes on to argue that AI will continue to develop in siloed form, with new and impressive machines continuing to scare doomsayers with their abilities within relatively narrow task domains while remaining "incredibly dumb" when it comes to everything else.

The risk remains the same as it was in the 1980s: the public and policy-makers see machines being amazing within these narrow domains while never seeing how badly they fail when tasks become general and start to approach the edges of human cognition.