How Do You Teach a Computer Common Sense?

It may be a necessary skill for artificial intelligence to develop human-like reasoning.

Common sense is a funny thing. It's where we, as people coexisting within a given society, reach unsaid agreements on basic features and interpretations of the world, whether "we" like it or not. For one, it's a deeply conservative notion, arguably acting as a drag on societal evolution, and one that marginalizes by definition. But it's also, in a sense, a living history of the present: the contested Wikipedia entry of a society trying to understand itself through often ugly and dated axioms. It may even be necessary for human reasoning.

What the hell could this possibly have to do with computer science? Ramon Lopez de Mantaras, director of the Spanish National Research Council's Artificial Intelligence Research Institute, believes that one of the central tasks in creating human-like artificial intelligence is teaching machines common sense, or something a lot like it. "In the last 30 years, research has focused on weak artificial intelligence—that is to say, on making machines very good at a specific topic—but we have not progressed that much with common sense reasoning," he posited in a recent debate, according to IT Iberia's Anna Solana. General or common sense reasoning, the argument goes, is key to understanding the unanticipated, which is key to real intelligence.

Like many if not most artificial intelligence researchers, Lopez de Mantaras is dismissive of the "singularity," the highly sketchy notion that at some point in the near future machines will acquire the ability to simulate human brains, opening up a brave new world in which machines and humans can trade places, with all sorts of neat and scary science-fictional outcomes. Part of the problem, he argues, is that the brain is not exclusively an electrical device; it also involves analog, poorly modeled chemical processes. The other part is that no one has really figured out how to program common sense. We can teach a computer the axioms of geometry—that, say, a line can be drawn through any two points—but not so much those of society.

Speaking at a conference a couple of years ago in Barcelona, Lopez de Mantaras offered a very clear definition of the common sense problem and its implications. "The dominant model at the present time is the digital computer, though the World Wide Web has recently been proposed as a possible model for artificial intelligence, particularly to solve the main problem faced by artificial intelligence: acquiring commonsense knowledge," he said. "This knowledge is not generally found in books or encyclopedias and yet we all possess a great deal of it. One example is 'water always flows downhill.' This fact is not generally found in encyclopedias, yet nearly ten thousand websites mention it. The same is true of many other pieces of knowledge, which is why the Internet is considered to be the best repository of commonsense knowledge currently available."

As currently realized, artificial intelligence is mostly concerned with creating "idiot savants"—machines that can do a few specialized tasks very well, but can't do general or unpredictable tasks in any way resembling human intellect. Describing even the most basic visual scene, or processing simple sentences, is at the edge of contemporary AI's capabilities.

"The most important lesson we have learned in the 55 years of existence of artificial intelligence may well be that what seemed to be the most difficult task (diagnosing diseases, playing chess at Grandmaster level) has turned out to be relatively easy and what seemed the easiest has turned out to be the most difficult," Lopez de Mantaras said.

Lopez de Mantaras and his group at the Artificial Intelligence Research Institute are working with researchers at Imperial College London on a project designed to teach robots to play musical instruments using common sense reasoning. It's based on a musical instrument known as the Reactable, which is similar to an old-school modular analog synthesizer. It consists of a large multi-touch-enabled table on which robots make sounds by manipulating different physical objects in different ways and by making different connections between those objects.

The idea is that by learning how to accomplish different things within the Reactable environment, the AIs are also learning general "common sense" concepts. For example, a machine might learn that to use a rope to move an object, one needs to pull the rope and not push it. To us, this is common sense, but a robot doesn't come packaged with this knowledge, or virtually any of the knowledge that we as humans consider obvious or self-evident.

"We are now doing experiments to see what happens when you move the instrument around to see whether the robot is able to rediscover a sound position," Lopez de Mantaras noted at the debate, according to Solana.

The idea of common sense knowledge can be generalized to a more technical concept known as transfer learning, which Lopez de Mantaras and a group of Brazilian computer scientists defined in a recent paper as "a paradigm of machine learning that reuses knowledge accumulated in a previous task to speed up the learning of a novel, but related, target task."
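
The reuse-then-speed-up idea behind transfer learning can be sketched in a few lines. This is a toy illustration, not the paper's algorithm: a one-variable regression model learned on a source task is reused to initialize a related target task, which then converges in fewer gradient steps than learning from scratch. The tasks, learning rate, and numbers here are all invented for the example.

```python
# Toy transfer learning sketch (hypothetical example, not the paper's method):
# knowledge (a learned weight) from a source task speeds up a related target task.

def fit(w, data, lr=0.01, tol=1e-3, max_steps=10_000):
    """1-D linear regression via gradient descent; returns (weight, steps taken)."""
    for step in range(max_steps):
        # Gradient of mean squared error for the model y_hat = w * x
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if abs(grad) < tol:       # converged
            return w, step
        w -= lr * grad
    return w, max_steps

source_task = [(x, 2.0 * x) for x in range(1, 6)]   # source: learn y = 2.0x
target_task = [(x, 2.1 * x) for x in range(1, 6)]   # related target: y = 2.1x

w_source, _ = fit(0.0, source_task)        # learn the source task from scratch
_, steps_scratch = fit(0.0, target_task)   # target task, starting from zero
_, steps_transfer = fit(w_source, target_task)  # target task, reusing source weight

print(f"from scratch: {steps_scratch} steps; with transfer: {steps_transfer} steps")
```

Here the "knowledge" transferred is a single learned weight; in the paper's setting it would be richer structure, such as learned semantic relationships, but the principle is the same: starting near a related task's solution shortens the search.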

Said paper approached the problem of semantic learning—in which general relationships or structures are abstracted from many observations, such as in language learning—via an algorithm based in part on the idea of common sense. Here, common sense acts as a boundary of sorts, reining in possible interpretations of phenomena and filtering out random data. In our everyday lives we're bombarded with this irrelevant data, either sensory or as information encoded in language and symbols, which we discard without even thinking about it. If we didn't have this ability, we would be severely limited in our ability to interpret the world correctly or as others do. (It happens that this ability, or its deficit, is thought to be related to schizophrenia.)

This is a problem for computers, but Lopez de Mantaras seems to be onto a solution. Artificial intelligence may depend on it.