There's a Bit of a Flaw in the Way Artificial Intelligence Is Being Developed

Despite having Google, Facebook, and the US government on the case, the reality is we're a long way from creating an artificial brain.
Image: Michael Cordedda/Flickr

Computers that can think and learn like humans have long been a dream of wild-eyed futurists, but despite having Google, Facebook, and the US government on the case, the reality is we're a long way from creating an artificial brain.

As a nice little reminder of this, a recent study found a design flaw at the core of likely every image-recognizing Artificial Neural Network (ANN), a flaw that could significantly slow progress toward the fantastical innovations that artificial intelligence promises.

ANNs are algorithms that can teach themselves to recognize patterns and objects by building connections between points of data. Unlike other kinds of AI machine learning, they work with “unlabeled data”—data and images that haven’t been pre-selected and classified by human supervisors.
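
To make the idea of a network "teaching itself" from unlabeled data a bit more concrete, here is a minimal sketch of one common unsupervised setup: a tiny autoencoder that learns to compress and reconstruct its inputs without ever seeing a label. Everything here (the layer sizes, the random data, the learning rate) is an illustrative assumption, not a description of Google's or Facebook's actual systems.

```python
# A minimal sketch of an "unsupervised" neural network: a tiny autoencoder that
# learns to compress and reconstruct unlabeled data. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "images": 200 flattened 8x8 patches with some shared structure.
data = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64)) * 0.1

# One hidden layer of 16 units: encode 64 -> 16, decode 16 -> 64.
W_enc = rng.normal(scale=0.1, size=(64, 16))
W_dec = rng.normal(scale=0.1, size=(16, 64))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.01
for epoch in range(500):
    hidden = sigmoid(data @ W_enc)   # "connections" from inputs to hidden units
    recon = hidden @ W_dec           # the network's reconstruction of its input
    err = recon - data               # no labels anywhere: the target is the data itself

    # Backpropagate the reconstruction error and nudge the weights.
    grad_dec = hidden.T @ err / len(data)
    grad_hidden = (err @ W_dec.T) * hidden * (1 - hidden)
    grad_enc = data.T @ grad_hidden / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction error:", np.mean(err ** 2))
```

The reconstruction error should shrink over the training loop, which is all "learning patterns from unlabeled data" means here: the network discovers enough structure in the inputs to rebuild them from a compressed representation.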

The ultimate goal of digital neural networks is for computers to “think”—to reason, decide, and understand information on their own, without the answer being programmed. Google is at the forefront of the research, currently feeding its network thousands of YouTube videos and Street View images to build the “Google knowledge base.” So far, it’s learned what a cat is, and it can identify house numbers.

A lot of the futuristic tech we dream about depends on advances in this technology. Google hopes that image-recognizing ANNs will improve projects like autonomous cars and Google Glass by allowing machines to recognize more objects faster. Imagine, for example, a Glass app that can recognize every object you own and learn new ones—you'd never lose your keys again—or an autonomous car that can safely navigate obstacles it was never programmed to encounter.

Facebook has integrated similarly autodidactic ANNs into its platform to model connections between users and recognize people in photos. “It’s currently to the point where a lot of machine learning is already used on the site—where we decide what news to show people and, on the other side of things, which ads to display,” Yann LeCun, the NYU researcher Facebook hired to develop its ANN tech, told Wired. The end goal, he said, is to create a network so robust that it can effectively model what a human brain is interested in viewing on its News Feed.

The Defense Department is looking into the technology, too. DARPA is using an ANN algorithm to build a network that can identify people and objects in images found on recording devices confiscated on the battlefield. The agency is also hoping to expand the project’s scope by building a fully modeled digital brain.

Unfortunately, the innocuously titled study, “Intriguing Properties of Neural Networks,” found that neural nets have got a problem. In fact, they’ve probably got millions, and they’re called “adversarial examples.”

The team of researchers from Google, Facebook, and academia found that if they first presented the computer-brain with an image it could recognize and then modified the pixels ever so slightly, the algorithm could be tricked into misclassifying the second image, even though the two images would look identical to the human eye if placed side by side.

The team developed a formula to generate these adversarial examples and found that while there’s a low probability of them occurring in test data, in the wild they’re actually very dense. That means that, statistically, next to every classifiable image is a virtually indistinguishable one that a neural network can’t recognize.
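
The paper describes its own procedure for finding these adversarial examples; the toy sketch below is not that procedure, just an illustration of the underlying idea that a carefully chosen, tiny per-pixel change can flip a classifier's answer. The "classifier" is a made-up linear scorer, and every number in it is an assumption.

```python
# A rough, hypothetical sketch of an "adversarial example": a small, targeted
# per-pixel nudge flips the prediction of a toy classifier.
import numpy as np

rng = np.random.default_rng(1)
dim = 1024                                     # pretend each "image" has 1,024 pixels

# A toy linear classifier: a positive score means "cat," anything else "not cat."
weights = rng.choice([-1.0, 1.0], size=dim) * 0.05

def predict(image):
    return "cat" if image @ weights > 0 else "not cat"

# Start with an image the classifier confidently gets right.
image = rng.normal(size=dim) + 3.0 * weights   # nudged clearly onto the "cat" side
print("original prediction:", predict(image))

# Adversarial tweak: move every pixel a small step against the weight's sign.
epsilon = 0.3
adversarial = image - epsilon * np.sign(weights)

print("largest per-pixel change:", np.max(np.abs(adversarial - image)))
print("adversarial prediction:", predict(adversarial))
```

The per-pixel change is tiny compared to the pixel values themselves, yet because it is aligned against the classifier's weights across every pixel at once, the prediction flips, which is the gist of why these examples sit so close to correctly classified images.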

What’s more, this isn’t a design flaw unique to one system—the adversarial examples were consistently misclassified across several ANNs, and they’re likely universal, the study found.

It’s all a little bit technical, but it could have major implications for the future of machine learning. The study’s findings mean that labs developing digital neural networks basically have to backpedal and retrain their systems to correctly classify adversarial examples. If they don’t, well, it’s not hard to imagine how badly things could end for a person DARPA’s system misclassified as a building to be bombed.
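
What that retraining might look like, in the loosest possible sketch: generate perturbed copies of the training data, keep their correct labels, and fold them back into each training step. The toy "network" below is just a logistic regression, and every name and number is an assumption for illustration, not any lab's actual pipeline.

```python
# A minimal, illustrative sketch of training on adversarial copies of the data:
# perturbed inputs keep their correct labels and are mixed into every update.
import numpy as np

rng = np.random.default_rng(2)
n, dim = 500, 64
X = rng.normal(size=(n, dim))
true_w = rng.normal(size=dim)
y = (X @ true_w > 0).astype(float)           # labels for the clean data

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(dim)
epsilon, lr = 0.1, 0.5
for step in range(200):
    # Craft adversarial copies: nudge each input in the direction that
    # increases the loss for its own (correct) label.
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + epsilon * np.sign(grad_x)

    # Train on clean and adversarial data together, labels unchanged.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    grad_w = X_all.T @ (sigmoid(X_all @ w) - y_all) / len(y_all)
    w -= lr * grad_w

clean_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
adv_acc = np.mean((sigmoid(X_adv @ w) > 0.5) == y)
print(f"accuracy on clean data: {clean_acc:.2f}, on adversarial copies: {adv_acc:.2f}")
```

The point of the sketch is the shape of the fix, not the specifics: the perturbed inputs are treated as more training data with the right answers attached, so the model is pushed to stop misclassifying them.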

Potentially life-threatening and otherwise far-fetched consequences aside, the big takeaway from the study is that we should be careful not to buy into the hype surrounding artificial brain tech. The human mind is extremely difficult to reverse engineer, and we’re not anywhere close to fully modeling its intricacies.