
In The Future, We'll Leave Software Bug Hunting to the Machines

In the future, having AI that can help humans find bugs won’t just be practical, but a necessity.

In the fictional cyberpunk card game Netrunner, there is Ice, and there is AI. Ice protects computer servers from attack, while the AI's job is to break in. For the most part, this happens on its own, pitting computer against computer. It is an autonomous, intelligent arms race—one that, in our world, does not, yet, exist.

But for the past two years, a handful of researchers, hackers, and reverse engineers have been working to bring this reality to life. Or, at least, to lay its foundations.


"Being able to protect the entire cyber attack surface that is present in our lives, and doing it entirely manually, is a herculean task," says David Melski, the vice president of research at computer security software developer GrammaTech.

His company, along with professors from the University of Virginia, is one of seven finalists in DARPA's Cyber Grand Challenge, a digital capture-the-flag competition sponsored by the US military's defence research arm that will take place in August at DEF CON, the infamous Las Vegas hacker convention.

Each team has been tasked with designing a tool that can tear apart software, look for vulnerabilities, and then secure them against attack—while at the same time using the vulnerabilities they discover to attack the opposing teams.

Essentially, DARPA is asking that teams design bug-finding apps that can hack other teams' computers, while preventing other teams from hacking them back. The grand prize is $2 million.

It's not hard to see why DARPA is interested in such tools. Software runs everything now, from the power grid to lightbulbs, and is an increasingly attractive target for attack. But no software is bug-free, and more software means more bugs. In the future, having AI that can complement the work humans already do at finding bugs—both to patch them and exploit them—won't just be practical, but a necessity.

It's important to note that the challenge environment has been carefully constructed. You couldn't just drop this code into the real world and expect it to work, not yet. That's because, "in its full scope, this vision is daunting," wrote Jörg Hoffmann, a professor at Saarland University's Foundations of Artificial Intelligence Group, in a paper last year for the Association for the Advancement of Artificial Intelligence on the challenges facing intelligent penetration testing. "Even for purely technical attacks, realistically simulating a hacker arguably is 'AI-complete.'"


"We fully believe that it's going to be years before a computer can replace a human, because humans, especially when you look at computer security, have this spark of creativity."

In other words, Hoffmann argued, achieving the far-off vision that DARPA's Cyber Grand Challenge ultimately imagines—computers that can find and patch bugs, and then use those bugs to launch cyberattacks, all on their own—would first require a truly intelligent machine.

Of course, the question of what constitutes "real" artificial intelligence has long been up for debate, and it may very well be the case that we don't quite need a firewall that can think to do a competent job in the near term. "I believe there's this old joke that it's only called artificial intelligence until the problem has been solved, and then it's automation," Melski said. "You have to start answering the question of what counts as intelligence, and that's tricky."

Case in point: the programs these teams are creating can already rival—and in some cases, exceed—what humans can do. But in terms of creativity and cunning, it will be quite some time before they can actually "think" about cybersecurity problems in a human way. With that in mind, imagine DARPA's Cyber Grand Challenge as a stake in the ground marking where that work starts.

At the University of Idaho's Center for Secure and Dependable Systems, director Jim Alves-Foss and research assistant professor Jia Song have set their sights, in the near term, on a much more attainable task. "My goal is to make tools and methodologies available to system developers so it's cheaper and easier to build secure code," Alves-Foss said. They have the smallest Grand Challenge team—it's only the two of them—and they are self-funded.


According to Alves-Foss, bugs that researchers have known about for decades are still cropping up in newly written code—the sort of low-hanging fruit for attackers that he believes "can be fixed automatically," and don't necessarily require AI to discover (his team has opted for a mix of heuristics and algorithms to quash bugs).
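To give a sense of how far simple heuristics can go, here is a minimal sketch in Python of a scanner that flags calls to a few C library functions long associated with buffer-overflow bugs. The rule list and the script are illustrative assumptions, not the Idaho team's actual tooling, but they show how decades-old bug classes can be caught without anything resembling AI.

```python
import re
import sys

# Illustrative only: a few C library calls tied to decades-old bug classes.
# The function names are real, but this rule list and the script itself are
# a hypothetical sketch, not the Idaho team's actual tooling.
RISKY_CALLS = {
    "gets":    "unbounded read into a buffer",
    "strcpy":  "no length check on the destination buffer",
    "sprintf": "no bound on formatted output",
}

PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan(path):
    """Flag lines in a C source file that call a known-risky function."""
    findings = []
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            match = PATTERN.search(line)
            if match:
                call = match.group(1)
                findings.append((lineno, call, RISKY_CALLS[call]))
    return findings

if __name__ == "__main__":
    for lineno, call, reason in scan(sys.argv[1]):
        print(f"{sys.argv[1]}:{lineno}: call to {call}() -- {reason}")
```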

Rather, where AI has come in handy thus far is deciding where to focus that analysis, something that Melski's team has been doing at GrammaTech and UVA. According to Melski, one challenge when having a computer search for vulnerabilities is how to optimize that search.

"While DARPA is providing a lot of compute resources, they're still finite. And so you still have to make decisions about how to apply those compute resource," Melski explained. It's here that the team has used aspects of AI research, creating an intelligent taskmaster that can determine how much effort to expend on searching each program—both where bugs are mostly likely to be found, and the severity of those bugs.


For now, it's less about having an AI discover and invent completely new types of attacks, and more about the logistics of divvying up limited computing resources to prioritize the search for known bugs and ones like them.

"We fully believe that it's going to be years before a computer can replace a human, because humans, especially when you look at computer security, have this spark of creativity," said David Brumley, a co-founder of ForAllSecure, another security software developer and finalist in DARPA's challenge.


Rather, the way these teams see their work at this stage is not so much in service of that grand cyberpunk dream of intelligent computers fighting it out on a neon-tinged abstraction of cyberspace, but as a means of augmenting the work humans are already doing.

For example, Alves-Foss wants to take the work that he and Song have done back to the classroom, where it can be used as a toolset to help people write better code. The idea is that, by having a tool that can identify insecure or vulnerable code as it's being written—but before it's shipped to users—we can reduce the number of software vulnerabilities going forward.

Similarly, both GrammaTech and ForAllSecure said that their work could easily be integrated back into their respective commercial products, which take different approaches to evaluating the security of popular programs. In the near term, they suggested that these techniques could be deployed as a consumer or business product, built to scan apps downloaded from the internet for bugs and potential vulnerabilities.

At ForAllSecure, Brumley says that their company's goal is to harness the "cold hard logic of computers" to take vulnerabilities their team has already found, and find them in other programs—something that would take "an army" of humans to do otherwise, added co-founder Thanassis Avgerinos.
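As a toy illustration of that "find it again elsewhere" idea, the Python sketch below scans binaries for byte patterns lifted from previously discovered vulnerabilities. Real systems reason about program semantics rather than raw byte signatures, and the signature list here is entirely made up for illustration; nothing in it reflects ForAllSecure's actual approach.

```python
import sys

# Hypothetical byte patterns standing in for fingerprints of previously
# found vulnerabilities. Illustration only; not a real vulnerability database.
KNOWN_BAD_SIGNATURES = {
    b"\x55\x89\xe5\x83\xec\x28\x8b\x45": "stack-buffer overflow, variant A",
    b"\x31\xc0\xba\xff\xff\xff\x7f":     "unchecked length, variant B",
}

def scan_binary(path):
    """Return labels for any known-bad byte pattern found in the file."""
    with open(path, "rb") as f:
        blob = f.read()
    return [label for sig, label in KNOWN_BAD_SIGNATURES.items() if sig in blob]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for label in scan_binary(path):
            print(f"{path}: possible match -- {label}")
```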

In each of these cases, the programs being written aren't exactly intelligent in the human sense, but they are doing things that humans either can't do at all, or could only do with a ridiculous amount of time and effort. It may not be intelligence in the sci-fi sense, but depending on how you characterize intelligence, it's certainly a start.