The CIA Used Artificial Intelligence to Interrogate Its Own Agents in the 80s

Newly declassified documents show that the spy agency had grand plans for computerized interrogation.

Sep 22 2014, 9:05pm

Image: Autpilot/Wikimedia Commons

The CIA has notoriously been, well, "innovative" in developing new interrogation techniques (if you consider waterboarding an innovation, at least). Newly declassified documents reveal that willingness to experiment is nothing new: 30 years ago, the spy agency pitted one of its own agents against an artificial intelligence interrogator.

The documents in question, written in 1983 and titled "Interrogation of an Alleged CIA Agent," describe a series of experimental tests conducted in the early 1980s in which the CIA repeatedly interrogated its own agent, referred to in the report as Joe Hardesty, using a primitive AI called Analiza.

The declassified document is both fascinating for its distinct retrofuture flavour and eerily prescient, because the US government is now set to use virtual entities to conduct national security interviews. It's also kind of hilarious, because the interrogation transcript reads like a conversation with a really frustrating chatbot. Here's a slice:

AI: You had better straighten out your attitude, Joe, or it will be a long time before you ever leave this country.

Hardesty: What do you mean by that crack?

AI: You know what I mean.

Hardesty: No, I do not.

AI: Think about it.

Hardesty: Why?

AI: Why do you want to know why?

Hardesty: You made a threat and I want an explanation.

AI: I think the message is clear.

Hardesty: What message?

AI: Do you work for the American Central Intelligence Agency?

Hardesty: Hell, no.

AI: Why do you persist in lying to me?

Hardesty: I am not lying. You have no right to treat me like this.

The intelligence on display in the transcript is clearly undeveloped, and seems to consist of a mixed bag of predetermined threats meant to goad interrogation subjects into spilling their secrets, along with open-ended lines of questioning.

According to the document, Analiza consisted, in part, of a crude machine learning algorithm that stored Hardesty's responses in its memory, along with a pre-set question bank that it could draw from.

"Other aspects of the program are probing Joe's vulnerabilities," the document stated. "AI records 'focus variables,' Joe's tendency to concentrate on various subjects, and 'profile variables' to serve as indicators of Joe's hostility, inquisitiveness, talkativeness, and understandability, and to pose questions about these."


Even way back then, the authors had a striking vision for future virtual entities that can learn on their own, adapt, and think abstractly. According to the document, the CIA believed it was possible that computers could "adapt," "pursue goals," "modify themselves or other computers," and "think abstractly." 

Potential applications for computer algorithms like Analiza could include training recruits before they head into the field and face the risk of an interrogation with a human opponent, according to the document.

The CIA, like the field of artificial intelligence itself, has come a long way since the 1980s, and algorithms that attempt to mimic brain processes (artificial neural networks) like those being developed by Google have achieved many of the goals the CIA set decades ago. The agency itself is heavily invested in AI development today by way of its venture firm, In-Q-Tel, which recently gave a funding boost to Narrative Science, a company developing AI that can glean insight from data and turn it into a semi-readable news article.

"Enhanced interrogation techniques" may very well take on a new, unsettling meaning if the CIA's technological fever dream of the 80s ever comes to fruition. AI interrogation, while presumably less violent and repugnant than waterboarding, for example, could present its own set of moral transgressions.

When your captor is a machine, there is no humaneness to be found, and, hence, no one to plead with. When even that small avenue of humanity is done away with in the proceedings of state-sponsored barbarism, what is left? Illegal detainments could continue with only slight human involvement.

Even though decades' worth of development have passed since the CIA's initial dabbling with AI interrogation techniques, virtual entities that can converse naturally with humans are still far off.

The recent case of chatbot Eugene Goostman, which passed the Turing Test through trickery rather than genuine intelligence, demonstrated this. Even so, with government agencies like the CIA, DARPA, and powerful corporations like Google on the case, the possibility might be closer than we think.