The Problem With Using Metadata to Justify Drone Strikes

When marking someone for death is like feeling around in the dark.
Image: US Air Force

The US military maintains that its drone program delivers deadly "targeted strikes" against its enemies overseas, and yet, reports of civilians being killed by drones keep pouring in.

Secret documents prepared as part of a Pentagon report on the US drone program in Yemen and Somalia, obtained by The Intercept, reveal the reason for this apparent contradiction: when selecting targets for drone strikes, the US military is over-reliant on signals intelligence, or SIGINT—the metadata of who calls whom and when, as well as the content of phone and online communications.

This kind of intelligence is often supplied by foreign governments, is difficult to confirm on the ground in Yemen and Somalia, and is easily gamed by adversaries, The Intercept's report on the documents alleges. Basically, it's unreliable until a human confirms it. Yet in Yemen and Somalia, signals intelligence makes up more than half of the intel that goes into marking someone for death, the documents state.

"It's stunning the number of instances when selectors are misattributed to certain people," the source who leaked the documents told The Intercept, "And it isn't until several months or years later that you all of a sudden realize that the entire time you thought you were going after this really hot target, you wind up realizing it was his mother's phone the whole time."

Metadata such as phone records and ambiguous online chatter can only tell analysts so much about a potential target; until someone confirms it on the ground, targeting is an exercise in interpretation. As a result, an apt metaphor for marking someone for aerial assassination is touching a strange animal in a dark room, Christine Fair, associate professor of security studies at Georgetown University, told me over the phone.

Per Fair's metaphor, numerous people touch different parts of the animal—one person might feel leathery skin, another a horn—and an animal expert ties all these points of information together to draw a conclusion: it must be a rhinoceros. But until someone turns on the lights, you'd never know that it was actually a triceratops. Hopefully, nobody killed the animal before someone flipped the lights.
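The metaphor translates almost directly into a toy data-fusion problem. In the hypothetical sketch below, each report from the dark room is consistent with several animals; intersecting them narrows the field, but the remaining ambiguity can only be resolved by turning on the lights.

# Each observation made in the dark is consistent with several hypotheses.
observations = {
    "leathery skin":   {"rhinoceros", "triceratops", "elephant"},
    "a horn":          {"rhinoceros", "triceratops"},
    "heavy footsteps": {"rhinoceros", "triceratops", "elephant"},
}

# Fusing the partial reports narrows the candidate set...
candidates = set.intersection(*observations.values())
print(candidates)  # {'rhinoceros', 'triceratops'} -- still ambiguous

# ...but the expert's "conclusion" is just the most familiar survivor.
# Only ground truth (the lights, or a human on the ground) settles it.
best_guess = "rhinoceros"
ground_truth = "triceratops"
assert best_guess != ground_truth and ground_truth in candidates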

"When you have multiple means of intelligence, you can catch mistakes," Fair said. "Signals intelligence alone is not quality intelligence. Whether you are using that for a sniper, whether you're using that for a drone, or whether it's a conventional air strike—it doesn't matter how you're eliminating that person."

Fair's metaphor for intelligence gathering is unsettlingly close to what happens in the real world. In one instance, Fair said, she witnessed a meeting in which someone had been targeted for a lethal operation. The two parties she was with—she would not say which—had both analyzed signals intelligence and thought they had their man. They produced photos of two completely different people, acquired through human intelligence.

"This happens all the time," Fair said. "The good news is that that was caught in a joint meeting, but had someone else not had a countervailing picture of the dude, who knows who would have been killed?"

The ineffectiveness of metadata in investigating potential targets has been well-documented in a domestic context. A declassified 2009 report on the NSA's phone metadata collection program, Stellarwind, concluded that between 2000 and 2004, just 1.2 percent of the tips that arose from the surveillance contributed to identifying a terror suspect. According to Fair, the issue with ineffective domestic spying is "the exact same" as that with drone killings based on signals intelligence.
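The arithmetic implied by that 1.2 percent figure is worth spelling out. A back-of-the-envelope sketch, assuming the declassified yield rate applies uniformly across tips:

# Declassified Stellarwind figure: 1.2% of metadata-derived tips
# contributed to identifying a terror suspect (2000-2004).
yield_rate = 0.012

# Dead-end tips chased for every useful one:
dead_ends_per_hit = (1 - yield_rate) / yield_rate
print(round(dead_ends_per_hit))  # ~82

# If uncorroborated SIGINT is this noisy, a targeting pipeline that
# leans on it for more than half its intel inherits the same base-rate
# problem -- with lethal rather than investigative consequences.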

"Mistakes happen all the time even with precise weapons"

The idea of a more precise and sanitary kind of war led by intelligence and automated weapons is the result of a misunderstanding about how well such technologies actually work, war futurist and New America Foundation strategist Peter Singer wrote me in an email.

"Mistakes happen all the time even with precise weapons, with the causes ranging from tech failure to bad information on the target's identity, to maybe it wasn't a mistake in the first place," Singer wrote. "And second, we've changed our level of our expectations. Actions like the firebombing of Tokyo were accepted back in 1940s, while single digit casualty counts are viewed by many as unacceptable today, rightly so according to most ethicists."

But Somalia and Yemen are outliers in the drone war, Fair said. In Pakistan, there are many more "boots on the ground" available to confirm signals intelligence, she told me, and as a result fewer civilians die. However, human rights organizations contend that civilians are still regularly killed in drone strikes in Pakistan, sometimes in much higher numbers than "legitimate" targets.

The solution proposed by the Pentagon in the secret report is to beef up the number of aircraft flying surveillance missions. Fair, however, suggested that more humans in the targeting loop are needed to confirm what the machines pick up.

Either way, it's clear that even with reams of data flowing through government servers and being picked apart by experts, the drone war is anything but precise.