
MIT Engineers Beat the Complexity of Deep Parallel Processing

The future of parallel processing? Image: Karl-Ludwig Poggemann/Flickr

Engineers at MIT have developed an algorithm that promises to boost the speeds of multicore systems by nearly 50 percent, while improving their overall power efficiency by some 30 percent in the process. It doesn't entirely nuke the lingering parallel-processing memory bottleneck posed by the cache hierarchy, but the new scheme is poised to loosen it considerably. Computers can get faster after all. Phew.


The MIT researchers, led by electrical engineering professor Daniel Sanchez, presented their work at this month's Institute of Electrical and Electronics Engineers' International Symposium on High-Performance Computer Architecture, where they were nominated for the conference's best-paper award. As Sanchez and his group write in that paper, the scheme "uses a combination of hardware and software techniques to achieve performance close to idealized schemes at low overheads."

The problem here is a deep one. Single-core processors have run up against hard physical limits, reaching scales so small that the hardware begins to interfere with itself, in a sense. Noise and unpredictability dominate, and that sets a more or less fundamental limit on how much further a lone core can shrink or speed up. Multicore processing, where a bunch of processor cores are packed together on one chip, seems like a natural solution, but it comes with its own set of fundamental difficulties. It turns out that doubling the number of cores in a computer doesn't double its performance, and adding still more cores delivers diminishing returns.

The cache hierarchy problem has to do with how data is physically shuttled around the different cores of a multicore system. The key to faster parallel processing is better and more efficient sharing of tasks, which imposes a deep new complexity on how data is distributed across a given architecture. It seems natural to store data as close as possible to the core that will process it, but once sharing is involved, that quickly becomes a very, very difficult constraint to manage.


The group's allocation scheme visualized. Image: MIT

"For systems to scale efficiently, data must be close to the computation that uses it. This requires keeping cached data in banks close to threads (to minimize on-chip traffic),while judiciously allocating cache capacity among threads (to minimize cache misses)," Sanchez and his group write in the new paper. "We find that to achieve good performance, the system must both manage cache capacity well and schedule threads to limit capacity contention. We call this computation and data co-scheduling. This is a complex, multi-dimensional optimization problem."

The fundamental problem is an old one. It's known as "place and route," and the basic idea is to find the best way to minimize the physical distances between related computations on a given chip (or set of chips). The problem is NP-hard, a designation reserved for problems so difficult that no efficient exact algorithm is known for them. A best possible placement exists, but for a chip of any realistic size there isn't enough time or computing power in the entire universe to search for it exhaustively. So, engineers fudge approximate solutions.
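A quick back-of-envelope count shows why brute force is off the table. The workload size below (64 threads, 256 chunks of data) is invented for illustration, but the shape of the result is the point: the joint placement space explodes exponentially.

```python
# Back-of-envelope: how many joint thread-and-data placements exist on a
# 64-core chip? The thread and chunk counts are assumed, not from the paper.
cores, banks = 64, 64
threads, chunks = 64, 256

thread_placements = cores ** threads   # each thread could run on any core
data_placements = banks ** chunks      # each chunk could live in any bank
total = thread_placements * data_placements

print(f"roughly 10^{len(str(total)) - 1} candidate placements")
# -> roughly 10^578; exhaustive search isn't happening
```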

These approximate solutions do pretty well, it turns out. For a 64-core system, they can come up with good placements in a matter of hours. The catch is that hours for a computer are full-on millennia, when 50 million operations might be processed in the span of milliseconds. Sanchez's new algorithm delivers comparable placements in about 25 milliseconds, allowing the system to redistribute threads and data on the fly.

Sanchez explains: "What we do is we first place the data roughly. You spread the data around in such a way that you don't have a lot of [memory] banks overcommitted or all the data in a region of the chip. Then you figure out how to place the [computational] threads so that they're close to the data, and then you refine the placement of the data given the placement of the threads. By doing that three-step solution, you disentangle the problem." In other words, one intractable problem becomes two tractable ones: distributing the data and distributing the computations.
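Here is a heavily simplified Python sketch of that three-step idea. It is not the group's actual algorithm, which relies on hardware monitors and real cache measurements; the 4x4 grid, the bank capacity, and the access counts are all invented to show the shape of the heuristic: spread the data, put threads near their data, then refine the data placement.

```python
# Hypothetical three-step placement sketch; grid size, bank capacity, and
# access counts are invented for illustration.

GRID = 4                 # toy 4x4 tiled chip: one core and one cache bank per tile
TILES = [(x, y) for x in range(GRID) for y in range(GRID)]
BANK_CAPACITY = 2        # toy limit: each bank holds at most 2 data chunks

# accesses[thread][chunk] = how often that thread touches that chunk (toy numbers)
accesses = {
    "t0": {"a": 90, "b": 10},
    "t1": {"b": 80, "c": 20},
    "t2": {"c": 70, "d": 30},
}

def dist(p, q):
    """Hop count between two tiles on the mesh."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Step 1: place the data roughly. Spread chunks round-robin so no bank
# (or region of the chip) is overcommitted.
chunks = sorted({c for acc in accesses.values() for c in acc})
data_at = {c: TILES[i % len(TILES)] for i, c in enumerate(chunks)}

# Step 2: put each thread on the free tile closest, traffic-wise, to its data.
free = set(TILES)
thread_at = {}
for t, acc in accesses.items():
    def traffic(tile, acc=acc):
        return sum(n * dist(tile, data_at[c]) for c, n in acc.items())
    tile = min(free, key=traffic)
    thread_at[t] = tile
    free.remove(tile)

# Step 3: refine the data placement. Pull each chunk toward its heaviest
# users, respecting bank capacity.
load = {tile: 0 for tile in TILES}
for c in chunks:
    users = [(acc[c], thread_at[t]) for t, acc in accesses.items() if c in acc]
    def pull(tile, users=users):
        return sum(n * dist(tile, where) for n, where in users)
    best = min((tile for tile in TILES if load[tile] < BANK_CAPACITY), key=pull)
    data_at[c] = best
    load[best] += 1

print("thread placement:", thread_at)
print("data placement:  ", data_at)
```

Each step is cheap and greedy on its own, which is what keeps this kind of decomposition fast enough to rerun as the workload shifts.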

This scheme involves the introduction of a new on-board scheduler or monitor, which would take up about 1 percent of a given chip's real estate. Seems like a small price to pay for the future of fast computing.