
The Simple, Elegant Algorithm That Makes Google Maps Possible

Edsger W. Dijkstra’s short solution to a bottomless complexity.
Image: Andreas F. Borchert/Wiki

Algorithms are a science of cleverness. A natural manifestation of logical reasoning—mathematical induction, in particular—a good algorithm is like a fleeting, damning snapshot into the very soul of a problem. A jungle of properties and relationships becomes a simple recurrence relation, a single-line recursive step producing boundless chaos and complexity. And to see through deep complexity, it takes cleverness.

It was the programming pioneer Edsger W. Dijkstra who really figured this out, and his namesake algorithm remains one of the cleverest things in computer science. A relentless advocate of simplicity and elegance in mathematics, he more or less believed that every complicated problem had an accessible ground floor, a way in, and that math was a tool to find it and exploit it.

In 1956, Dijkstra was working on the ARMAC, a parallel computing machine based at the Netherlands' Mathematical Center. It was a successor to the ARRA and ARRA II machines, which had been essentially the country's first computers. His job was programming the thing, and once ARMAC was ready for its first public unveiling—after two years of concerted effort—Dijkstra needed a problem to solve.

"For a demonstration for noncomputing people you have to have a problem statement that non-mathematicians can understand," Dijkstra recalled in an interview not long before his 2002 death. "They even have to understand the answer. So I designed a program that would find the shortest route between two cities in the Netherlands, using a somewhat reduced road-map of the Netherlands, on which I had selected 64 cities."

"What's the shortest way to travel from Rotterdam to Groningen?," Dijkstra said. "It is the algorithm for the shortest path, which I designed in about 20 minutes."

Google Maps does this for us now, and we don't even really think about how complex a task it is. Shortest path problems, a key feature of graph theory with a whole lot of obvious real-world applications, get insanely deep very fast. The culprit is what's known (informally) as a combinatorial explosion, in which the number of different combinations that must be explored grows exponentially with the size of the problem.

The result of such an explosion is that problems like shortest path problems grow so quickly as to become practically incomputable by brute force, taking effectively infinite amounts of time to solve. It takes only a modest number of nodes in a given map or graph for the number of possible routes to push into the billions, requiring vast and unreasonable amounts of time to check them all.
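To get a feel for just how fast that blow-up happens, here's a small Python sketch (mine, not anything from Dijkstra's demo) that brute-forces a count of every distinct simple route between two cities, under the simplifying assumption that every city is directly connected to every other. Real road maps are sparser than that, but the shape of the growth is the point.

```python
from itertools import permutations

def count_simple_routes(n):
    """Count the distinct simple routes from city 0 to city n-1,
    assuming (purely for illustration) that all n cities are
    directly connected to one another."""
    inner = range(1, n - 1)                   # cities other than start and goal
    total = 0
    for k in range(len(inner) + 1):           # how many intermediate stops
        for _ in permutations(inner, k):      # every ordering of those stops
            total += 1
    return total

for n in range(2, 12):
    print(n, "cities:", count_simple_routes(n), "routes")
# By 11 cities there are already 986,410 distinct routes to compare,
# and the count grows factorially from there.
```

Dijkstra's insight was that you never have to enumerate those routes at all.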

The easiest way to explain Dijkstra's algorithm is probably with an example. Take the graph below, where the numbers given are the weights of each connection. (A weight could be a simple distance or really any relative cost associated with traversing a particular connection that we're seeking to minimize.) To start, we assign s, our starting position, the value 0. It takes 0 miles (or whatever) to get here because it's the beginning position. Next, we look at the neighboring nodes of s, which we can imagine as a sort of frontier to be explored.

Image: route.jpg

In the first iteration, we look to the closest node, which is 1 unit away. We assign that value as a label to the node, which we'll call a, and look onward at the next frontier nodes and their respective distances. b is 1 away from a (2 from the beginning), c is 2 away from a (3 from the beginning), and we also have d, which is 2 from the beginning. Since we're always after the shortest tentative path from the beginning, we next move to d from s (2 units) and assign a value of 2 to d. On the next iteration of the algorithm, from d we look ahead to c, which is 10 away (12 from s), but we also look again from our outpost at a, where we can still get to c in 2 (3 from the beginning) and b in 1 (2 from the beginning). So we set up our next outpost at b and assign it a label of 2 (2 units from the start).

Our explorer stationed at b is in for a disappointment. The only possible move, to t, is 10 units away (12 from the beginning). That's more than the 2 units from a to c (3 from the beginning), and the same as a trip from s to c through d, a possibility we can now safely discard (having arrived at c in only 3 units, rather than the 12 required via d). So now we're at c, and if this seems complicated, it really isn't. We're just making cautious, tentative steps from node to node, while being forced by the algorithm to always consider the shortest known path from the start.

Finally, we compare the candidates for reaching t: the route through b costs 12 in total, while the final jump from c costs just 1, for a total shortest path distance of 4. And so an incredibly complex—explosively complex—problem can be solved elegantly, simply, and even intuitively on paper.
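For the curious, here's roughly what that walkthrough looks like as running code: a minimal Python sketch of Dijkstra's algorithm, with a heap standing in for the "frontier" of outposts. The edge weights below are reconstructed from the walkthrough above (s-a is 1, a-b is 1, a-c is 2, s-d is 2, d-c is 10, b-t is 10, c-t is 1); the actual figure may differ, but these numbers reproduce the distances described.

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest-path distance from `start` to every reachable node.
    `graph` maps each node to a list of (neighbor, weight) pairs."""
    dist = {start: 0}
    visited = set()
    frontier = [(0, start)]                  # (distance from start, node)
    while frontier:
        d, node = heapq.heappop(frontier)    # closest unsettled node so far
        if node in visited:
            continue                         # stale entry; already settled
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist    # found a shorter tentative path
                heapq.heappush(frontier, (new_dist, neighbor))
    return dist

# Edge weights reconstructed from the walkthrough (treated as undirected).
edges = [("s", "a", 1), ("s", "d", 2), ("a", "b", 1), ("a", "c", 2),
         ("d", "c", 10), ("b", "t", 10), ("c", "t", 1)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

print(dijkstra(graph, "s"))
# {'s': 0, 'a': 1, 'd': 2, 'b': 2, 'c': 3, 't': 4} -- t is 4 units from s, as above.
```

The heap is doing the same job as the outposts: at every step the algorithm settles whichever unvisited node is currently closest to the start, which is exactly the cautious, tentative stepping described above.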

More examples:

Image: Dijkstra_Animation.gif

Dijkstra said he came up with the algorithm while sipping coffee on a cafe terrace. "In fact, it was published in 1959, three years later," he recalled. "The publication is still quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. Without pencil and paper you are almost forced to avoid all avoidable complexities. Eventually that algorithm became, to my great amazement, one of the cornerstones of my fame."

This, or at least some variation of it, is what makes Google Maps go 'round. Really, it's what makes such route-finding possible at all: just enough cleverness to see through the noise.