

The Thing Holding Back Internet Innovation Is the Internet Itself

But with new platforms for network experimentation, help is on the way.
Image: Donna Cox and Robert Patterson, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

There is internet innovation in the sense of cool new web apps and social media platforms and streaming music services (we definitely need more streaming music services), and there is the very wide-angle innovation that changes what it even means to be online: distributed computing, the cloud, networking. That is, the technology that is the internet itself. And, ultimately, that second kind of innovation rules everything else, from hackathon-produced apps to global cybersecurity.


There's a problem, however, according to a recent review study in Communications of the ACM: the standardization that's made the internet what it is also carries with it "the risks of reducing variability and slowing the pace of progress," write the authors, led by Mark Berman of the National Science Foundation's Global Environment for Network Innovations (GENI) project. The internet turns out to be a really shitty place for experimentation.

"Validation and deployment of potential innovations by researchers in networking, distributed computing, and cloud computing are often hampered by Internet ossification, the inertia associated with the accumulated mass of hardware, software, and protocols that constitute the global, public Internet," Berman and his group continue. "Researchers simply cannot develop, test, and deploy certain classes of important innovations into the Internet."

"In the best case," they argue, "the experimental components and traffic would be ignored; in the worst case, they could disrupt the correct behavior of the Internet."

As an example, Berman and co. cite the adoption of IPv6, the most recent version of the Internet Protocol (the traffic-routing and identification/location system for networked computers), first standardized by the Internet Engineering Task Force in the late 1990s. For the relatively "modest" changes offered by IPv6 to be put into place, it has taken well over a decade of slow and steady work; as of 2013, only about 4 percent of domain names supported the new protocol.
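
For a rough sense of what changes between the two protocol versions at the address level, here's a quick illustration using Python's standard ipaddress module (my example, not the paper's): IPv4 uses 32-bit addresses, while IPv6 moves to 128-bit addresses, and every networked device has to understand the new format.

```python
import ipaddress

# IPv4: 32-bit addresses, about 4.3 billion possible in total.
v4 = ipaddress.ip_address("203.0.113.7")   # address from the IPv4 documentation range
print(v4.version, int(v4))                 # 4 3405803783

# IPv6: 128-bit addresses, enough to number practically everything.
v6 = ipaddress.ip_address("2001:db8::1")   # address from the IPv6 documentation range
print(v6.version, v6.exploded)             # 6 2001:0db8:0000:0000:0000:0000:0000:0001
```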


That it took this long reflects the fact that the IP protocol touches networked components at every level: changing it is like trying to excavate and replace the foundation of a skyscraper. Researchers looking to investigate entirely new non-IP protocols or routing technologies have it even harder; often, simulation is their only opening.

Unfortunately, even at their best, internet simulations don't do a very good job of simulating the internet. They just aren't very realistic, Berman and his group note.

The answer may lie in what are known as future internet and distributed computing (FIDC) testbeds; Berman's GENI project is one, but there are several others. In an IEEE paper published last fall, he explained the idea as a combination of "slicing" (virtualized, end-to-end configurations of computing, networking, and storage resources) and "deep programmability," which lets the experimenter program devices within the virtualized network at every possible level. Crucially, these two features are realized not through simulations, but on actual physical computers.

"It is the property of deep programmability that creates the key opportunities for innovation in a FIDC testbed," the current paper explains. "In a deeply programmable environment, the experimenter controls the behavior of computing, storage, routing, and forwarding components deep inside the network, not just at or near the network edge."


The experimenter is afforded exclusive control over a set of shared resources, usually presented as a networked collection of IRL general-purpose computers. That collection can be arranged in whatever experimental topology is desired, and various special-purpose components can be added as well, such as sensors, high-performance computing resources, and/or cyber-physical systems. Because the experiment runs on actual computers and components rather than simulations, researchers can offer "end user opt-in," i.e., test their work on real users.
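
That opt-in idea is simple enough to sketch in a few lines. The function below is a hypothetical illustration (nothing like it appears in the paper): a volunteer user's request has some chance of being served by the experimental slice instead of the production network.

```python
import random

def route_request(user_opted_in: bool, experiment_fraction: float = 0.05) -> str:
    """Hypothetical illustration of "end user opt-in".

    Because the testbed is built from real machines carrying real traffic,
    volunteers can have a fraction of their requests served by the experimental
    deployment instead of the production path. The function name and parameters
    are made up for illustration.
    """
    if user_opted_in and random.random() < experiment_fraction:
        return "experimental-slice"
    return "production-network"

# Example: an opted-in user has a small chance of hitting the experiment.
print(route_request(user_opted_in=True))
```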

Berman's paper looks at FIDC systems through the lens of four test cases, each of which could really be its own post: cloud-based personalized weather "nowcasting" service CloudCast, future-internet architecture MobilityFirst, customization-friendly routing technology NetServ, and OpenFlow networks, an example of an emerging technology called software-defined networking (SDN).

All four are technologies that fundamentally rewire the internet itself in one way or another, yet they don't lend themselves very well to simulation. This is where FIDC testbeds come in. For example, a powerful way to test OpenFlow technology is to use it to route video streams and then evaluate the relative quality of the resulting video. "Performing such experiments in simulation is very difficult," Berman and co. write, "requiring simulation not only of network functions, but also the software stack of the video codec, the rendering pipeline, and post-processing functions of the video client."
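
To give a flavor of the control an OpenFlow experimenter has, here's a minimal sketch of a rule that steers video traffic out a particular switch port. It's written against the open-source Ryu controller framework, which is my choice for illustration rather than something the paper prescribes; the port numbers and match fields are assumptions.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class VideoSteering(app_manager.RyuApp):
    """Install one flow rule per switch: send RTP video (UDP 5004) out port 2.

    Illustrative only: a real experiment would compute routes across the slice
    and measure the resulting video quality at the client.
    """
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4/UDP traffic destined for the (assumed) RTP video port.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=17, udp_dst=5004)
        # Forward matching packets out switch port 2 (assumed to face the client).
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

In a real FIDC experiment, rules like this would be pushed to a slice of physical switches, and the quality of the video arriving at the client would be the measured outcome.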

Programmable, ultra-high-speed networks enabled by things like OpenFlow are probably the best example of what's to come, and of why it's not coming a lot faster. SDN is a fundamental rewrite of how networks function and, as such, it could be a complete disaster in impatient hands operating at too large a scale. It's being slowly adopted as you read this (it runs Google's internal network and is offered in some Microsoft server products), but its more widespread future may not be so far off.

"Although the technology is still in flux," Berman writes, "FIDC testbeds are already supporting important research and education initiatives. As these testbeds join together in international federations, their benefits increase combinatorially."