Image via Wikimedia Commons
Will a computer ever really mimic the human brain? Recently, we learned it takes 82,000 super-powerful processors to simulate just one percent of the brain for a single second—the brain is so large and complex, simulating the whole shebang is near impossible.
The futurists over at DARPA, who have long been chasing artificial intelligence, are after something slightly different. Rather than focusing on the mind's capacity and scale, they're interested in mimicking the cognitive thought process itself. They're working to develop a machine that can essentially reason and problem-solve on the fly, without human intervention—"intelligent real-time computing," as they call it. In other words, a computer that can not just think, but think on its feet.
The research agency's new program to this effect will focus on mimicking the cerebral neocortex—the part of the brain that's crucial for things like memory, perception, awareness, and attention. DARPA recently put out a request for information for the research and development of this technology, which it's calling a "Cortical Processor." "Although not a neuroscience project per se, it will heavily depend on a variety of neural models derived from the computational neuroscience of neocortex," the agency writes.
To achieve that level of insight and reason, DARPA's looking to big data—and the Defense Department has plenty of it. The goal is to develop a machine that can understand and learn from a huge onslaught of data—including new information arriving in real time—from complex environments, like, say, a battlefield. By processing and analyzing all this information in a smart way, the machine could theoretically "decide" on an appropriate action to take. DARPA describes it as "complex signal processing and data analysis."
The concept is based on the Hierarchical Temporal Memory method of machine learning, which draws on memory-prediction theory: the idea that cognitive function can more or less be distilled down to a basic algorithm. The idea is similar to the "data eye in the sky" project underway at DARPA's sister agency, IARPA (Intelligence Advanced Research Projects Activity). Those researchers hope that big data patterns, once we can get a handle on them, will help predict human behavior by revealing for the first time the "sociological laws of human behavior," much the way scientists can often predict environmental effects by understanding the basic laws of nature, the New York Times reported.
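To get a feel for the memory-prediction idea, here's a deliberately minimal sketch (not DARPA's actual system, and far simpler than real Hierarchical Temporal Memory): a model watches a stream of symbols, strengthens the links between each symbol and the one that follows it, and uses those learned links to anticipate what comes next. The class name and stream are invented for illustration.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy illustration of memory-prediction: learn which symbol
    tends to follow which, then predict the likeliest successor."""

    def __init__(self):
        # transitions[a][b] counts how often b followed a in the stream.
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, symbol):
        # Strengthen the link from the previous symbol to this one.
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        # Predict the most frequently seen successor of the last symbol.
        counts = self.transitions.get(self.prev)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Feed a repeating stream; the model learns the pattern and
# anticipates the next element from memory alone.
memory = SequenceMemory()
for symbol in "ABCABCAB":
    memory.observe(symbol)
print(memory.predict())  # prints "C": C has always followed B so far
```

Real HTM works on sparse distributed representations and hierarchies of cortical-column-like units rather than single symbols, but the core loop is the same: remember sequences, predict what's next, and update when the prediction misses.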
The thing is, getting a handle on gargantuan amounts of data is easier said than done. According to DARPA, current approaches to artificial intelligence—machine learning techniques based on probability and statistics—are too limited and don't scale to large and complicated datasets. They also aren't capable of spatial intelligence or comprehending a sense of time—areas DARPA is particularly interested in exploring in its new program.
The neocortex of mammals is quite proficient at both of those things: It successfully adapts to changing environments and "routinely solves extraordinarily difficult recognition problems in real-time," as DARPA puts it. And while we don't have a super thorough understanding of how the neocortex works at this point, scientists have identified some basic algorithmic principles that can be combined with machine learning to develop the new technology, the agency writes.
A slate of DARPA projects already in the works are inching closer to building war robots that mimic the brains of animals. Now the agency is taking that audacious goal a step further. In the future, these intelligent machines could not just autonomously think, remember, and reason like humans (or at least dogs and monkeys), but do so in real time, based on their current surroundings and new information. That would certainly be a powerful—if unwieldy—tool in the Defense Department's pocket.
H/T Network World