


Google Just Bet $500 Million That AI Can Fix Search

Before it builds a conscious robot army or turns Glass into your cybernetic friend, Google's going to use AI startup DeepMind to inch closer to semantic search.
Image via flickr/superbond1

It's not really accurate to call Google a "search company" anymore. After all, today's news that the tech behemoth is spending some half a billion dollars to buy the artificial intelligence startup DeepMind got speculation swirling that Google's planning to give its new robot army consciousness, or is angling to make Glass into a personal assistant that can read your mind.

But Google is a search company, and search sucks these days. So Google's leveraging AI, specifically deep learning, to fix it. Whatever the company's super sci-fi long-term vision is for DeepMind and its AI research, it's also going to use the technology to inch closer to truly semantic search.


I've been waiting for this so-called Web 3.0 era of semantic search for, what, about eight years now? It's going to be real handy when we can ask any question of our computer/smartphone/watch/brain-implanted chip and the thinking machine will fully understand the answer we're after. Google, of course, has already taken the first step toward that new era with its Knowledge Graph, launched in 2012. It's building up a massive store of facts and mapping the relationships between them, in order to put search queries into context and work toward a machine with common sense and understanding. Google's VP of engineering, John Giannandrea, described the Knowledge Graph vision in an interview with MIT Technology Review today:

As a general theme we’re trying to move beyond just searching to actually knowing about things. We think this is essential because we want to understand what you’re trying to do and give you some help … One of the main areas is to try and understand at a slightly higher level what text is about. Words that you see in a text are fundamentally ambiguous [to a computer] but if you have Knowledge Graph and can understand how the words are related to each other, then you can disambiguate them. If you see a document that talks about George Bush, Saddam Hussein, and Norman Schwarzkopf, you might be able to guess which Bush it is because only one of them had Norman Schwarzkopf there. That’s like a baby step towards actually understanding what this document is about.
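To make that baby step concrete, here's a minimal sketch in Python, using a small made-up graph rather than anything from Google's actual Knowledge Graph, of how knowing which entities are connected lets a machine guess which Bush a document means:

```python
# Toy illustration of the idea Giannandrea describes: a tiny, made-up
# "knowledge graph" links each entity to related entities, and an ambiguous
# name is resolved by checking which candidate shares the most links with
# the other names in the document. Not Google's API; just a sketch.

TOY_GRAPH = {
    "George H. W. Bush": {"Saddam Hussein", "Norman Schwarzkopf", "Gulf War"},
    "George W. Bush": {"Saddam Hussein", "Dick Cheney", "Iraq War"},
    "Saddam Hussein": {"Gulf War", "Iraq War"},
    "Norman Schwarzkopf": {"Gulf War"},
}

def disambiguate(candidates, other_mentions):
    """Pick the candidate entity most connected to the document's other entities."""
    def overlap(candidate):
        return len(TOY_GRAPH.get(candidate, set()) & set(other_mentions))
    return max(candidates, key=overlap)

doc_mentions = ["Saddam Hussein", "Norman Schwarzkopf"]
print(disambiguate(["George H. W. Bush", "George W. Bush"], doc_mentions))
# -> George H. W. Bush: only the elder Bush is linked to Schwarzkopf here
```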


The graph is populated with millions of objects and billions of connections between them, and it's always growing. But there's a shitload of information in the world, more than you could program into a computer in a lifetime. To achieve human-level intelligence, machines have to learn to teach themselves. And to understand semantic queries and deliver more relevant information, they have to learn to think the way humans do. Which is why, as Sergey Brin and Larry Page said in a 2008 TED Talk on the subject, "the ultimate search engine is artificially intelligent."

Enter deep learning, the new approach to AI that teaches machines to think by building neural networks loosely modeled on the human brain. As it happens, this is what the folks at DeepMind are good at. At this point we don't know much about the secretive startup, except that it built a neural network that let a computer teach itself to play Atari games all on its own. But we know the team will work with the top artificial intelligence talent over in Mountain View, headed up by futurist and director of engineering Ray Kurzweil, who's overseeing the effort to expand the Knowledge Graph by building out the "Google Brain."
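DeepMind's published Atari work combines this kind of trial-and-error learning with a deep neural network that reads raw screen pixels. The sketch below, which swaps the neural network for a small lookup table and invents a trivial five-cell "game," is only meant to show the underlying idea: a program that gets better at something using nothing but the score as feedback.

```python
import random

# A stripped-down illustration of trial-and-error learning. DeepMind's
# published approach (deep Q-learning) pairs this same update rule with a
# neural network that reads raw screen pixels; here an invented five-cell
# "game" and a small lookup table stand in for both.

N_STATES, ACTIONS = 5, [0, 1]          # action 0 moves left, action 1 moves right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    """Hypothetical environment: reaching the rightmost cell scores a point."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(500):                   # play many episodes with only the score as feedback
    state, done = 0, False
    while not done:
        # explore occasionally, otherwise exploit the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q_table[(nxt, a)] for a in ACTIONS)
        # nudge the value estimate toward reward plus discounted future value
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = nxt

# after training, the learned policy in the non-terminal cells is "move right"
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)])
```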

Kurzweil has some crazy plans for the company's thinking computers, not least of which is achieving the technological singularity, but let's focus on search for now. In an interview last month with the UK's Sunday Times, he predicted that by 2029 we'll have machines that understand language and human emotion, and that well before that, by 2018, the Google Brain will have revolutionized search as we know it.


“Right now, search is based mostly on looking for key words. What I’m working on is creating a search engine that understands the meaning of these billions of documents,” he said. “It will be more like a human assistant that you can talk things over with, that you can express complicated, even personal concerns to.” In other words, PageRank and keywords would be replaced by natural language and voice recognition to make Google your always-on "cybernetic friend."

In its most high-profile success yet, the Google Brain made headlines when its neural network learned to recognize images of cats in YouTube videos without ever being told what the animal was; essentially, it invented the concept of a cat on its own.
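The cat result came from a huge network digesting millions of unlabeled YouTube frames. As a rough, scaled-down illustration of the same idea, the NumPy sketch below trains a tiny autoencoder that is only ever asked to reconstruct its input, and it still ends up with internal codes that tell apart two recurring patterns it was never told existed; the eight-pixel "images" and network sizes are invented for the example.

```python
import numpy as np

# A toy version of learning structure from unlabeled data. The real "cat
# neuron" came from a vastly larger network trained on millions of YouTube
# frames; here a tiny autoencoder sees invented eight-pixel "images" and is
# only ever asked to reconstruct them: no labels anywhere.

rng = np.random.default_rng(0)

pattern_a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
pattern_b = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
data = np.array([(pattern_a if rng.random() < 0.5 else pattern_b)
                 + 0.1 * rng.standard_normal(8) for _ in range(500)])

n_in, n_hidden, lr = 8, 2, 0.1
W_enc = 0.1 * rng.standard_normal((n_in, n_hidden))
W_dec = 0.1 * rng.standard_normal((n_hidden, n_in))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(3000):                   # gradient descent on reconstruction error only
    h = sigmoid(data @ W_enc)           # internal code
    recon = h @ W_dec                   # attempt to rebuild the input
    err = recon - data                  # the input is its own target: no labels
    grad_dec = (h.T @ err) / len(data)
    grad_enc = (data.T @ ((err @ W_dec.T) * h * (1 - h))) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The two recurring patterns end up with different internal codes, even
# though the network was never told that two patterns exist.
print("code for pattern A:", sigmoid(pattern_a @ W_enc).round(2))
print("code for pattern B:", sigmoid(pattern_b @ W_enc).round(2))
```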

Plenty of other AI projects, in academia and the private sector, are working on cognitive computing and image recognition. The major players in Silicon Valley are scrambling to scoop up top talent in the hot field, in no small part because the more these companies stock up on computers that understand how users think and what users want, the more they can target the hell out of ads. Google just snatched DeepMind away from Facebook, which was also in talks with the startup. It's a big win for the search company, whose thick wallet gives it a clear edge in this particular race.

Here's another edge: since building neural nets that mimic the brain requires a massive amount of computing power, the companies with the biggest, fastest supercomputers are going to be ahead of the curve. The more data you can feed the model, the faster the machine learns. To that end, Google's studying how quantum computing could improve its web search. Its Quantum Artificial Intelligence Lab is hosted by NASA, which uses the lightning-fast D-Wave computer for its own research, too. Powering machine learning with quantum computing would be an astronomical leap forward for the semantic web.

But while it would be delightfully convenient to have the answers to all your questions an intuitive sentence away, better search is just the tip of the artificial intelligence iceberg. The promise of Web 3.0, "the web of meaning," could transform how we analyze and glean insights from the treasure trove of information at our fingertips. Once computers can process information the way the human mind does, they could unlock actual meaning in the clusterfuck of Big Data that's streaming in from the digiverse.

@meghanneal