
Are We Headed for 'Peak Code'? Not Likely

A group of economists offers a cynical take on the programming future.

The problem with code is that it's too good. This isn't to say that it solves its problems in the best and most efficient ways possible, or that it's guaranteed to achieve modular perfection. It's just that code lasts. The guiding principle of computer engineering is backwards compatibility, and for programming this translates roughly into reusability: high-quality, or even just well-functioning, code should never need to be rewritten.

The logical result of this is what Rough Type's Nicholas Carr calls "peak code." It's detailed in a recent paper by a trio of Boston University economists working with Earth Institute director Jeffrey Sachs, who model a hypothetical programming future in which programmers have made themselves redundant by being too good at, well, programming. The best code by the best programmer never needs to be rewritten, and so that task is eliminated from the marketplace—along with the programmer.

"Our simulated economy is bare bones," Sachs and co. write. "It features two types of workers consuming two goods for two periods. Yet it admits a large range of dynamic outcomes, some of which are quite unpleasant." This is the result of "code accumulation," in which past programmers wind up competing with current and future programmers. It's easy enough to imagine them as the same people.

The result, the economists explain, is a bust—or, at best, a boom-bust cycle—at some point in the not-terribly-distant future. "The combination of code and capital that produce goods constitutes, in effect, smart machines, aka robots," Sachs and his group write. "And these robots contain the stuff of humans—accumulated brain and saving power. Take Junior—the reigning World Computer Chess Champion. Junior can beat every current and, possibly, every future human on the planet. Consequently, his old code has largely put new chess programmers out of business."

As the authors explain, the situation only grows worse as software becomes less proprietary; open-source, in other words, is part of the problem. It all adds up to something cynical as hell, but with some amount of truth to it.

That said, the counterargument is found in Sachs' own chess example. Chess stays the same. The rules are the same, always. Programming chess bots is a highly idealized situation, in which the resulting software exists in a static environment. A chess game will be a chess game, tomorrow or a hundred years from now.

Real-world code seldom operates in static environments, however, and this is why software rot exists. The environment in which code runs is dynamic and will continue to be so. The result is a sort of decay: code becomes less and less well suited to emerging conditions, and it accumulates bugs and deep inefficiencies.
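To make that kind of rot concrete, here is a minimal, hypothetical Python sketch (not drawn from Sachs' paper or Carr's post; the function and scenario are invented for illustration). The code never changes, but the assumption baked into it stops matching the world it runs in.

    # A hypothetical illustration of software rot: the code below is untouched,
    # but the environment around it changes and its baked-in assumption goes stale.
    from datetime import date

    def loan_age_in_years(two_digit_start_year: int) -> int:
        """Written decades ago, when every plausible year began with 19--."""
        start_year = 1900 + two_digit_start_year  # the stale assumption
        return date.today().year - start_year

    # In 1985, loan_age_in_years(80) correctly answered 5.
    # Today, a loan issued in 2005 and entered as 5 is treated as a 1905 loan,
    # so the "finished" code reports an age of well over a century.
    print(loan_age_in_years(5))

Nothing in the function broke; the world it was written for simply stopped existing.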

As rot advances, static software becomes error-prone and, indeed, ambles toward obsolescence. We can't program around every future. And in an ideal, unchanging world, we might extend Sachs' argument to anything at all: bridges, recipes, art, architecture. What's to say that the situation is so different from that of an architect or civil engineer? Is there some similar anxiety in those fields? What if I make this bridge too good?

Is there something about code that sets it apart from the larger world of "making things"? If there is, it could only matter in an unchanging world. Mathematical logic is surely more resilient than cement and aesthetics, but resilience only goes so far in a dynamic world. The ground moves, and concrete cracks. So it is with software.