In the Era of ‘Prosthetic Intelligence,’ the Right to Remain Silent Is the Right To Encryption

Self-incrimination has taken on a new, dangerous meaning, according to researchers.

Michael Byrne

You have the right to remain silent. It's a key principle of the United States criminal justice system, and of many others, that the accused can choose not to say anything, not to reveal anything, under questioning by the police or at trial. Generally, it's a right against self-incrimination. We can choose not to expose ourselves, not to disclose our thoughts and memories, and that refusal can't be used against us.

This seems pretty straightforward. It only takes seven words to encapsulate, anyhow. But technology is changing everything, including what and where the "self" even is. This is the starting point for an argument put forth by Silicon Genetics founder Andrew Conway and Electronic Frontier Foundation Chief Computer Scientist Peter Eckersley in a piece published this month in the Communications of the ACM. Simply put, technology has enabled, if not forced, us to project our most private interior selves into places that can be observed and recorded with few legal limits.

That is, without even realizing it, we have become reliant on what Eckersley and Conway call "prosthetic intelligence." Our digital machines are not mere tools but extensions of the most basic processes of human cognition: the suite of mental operations encompassing thoughts, experiences, sensations, and memories.

"As hunters, weapons were prosthetic claws," the pair write. "As gatherers, baskets were prosthetic arms. After the development of agriculture, horses and plows were huge prosthetic muscles. Later the industrial revolution made us physically strong to a level unimaginable beforehand. And looking back, the invention of writing was the first step on the road to a modern existence built on prosthetic intelligence, one where the states we share through the Internet and the financial system are becoming more important than the biological and physical environment around us."

And prosthetic intelligence is necessarily a much more public form of intelligence. It is uniquely observable. Cops of the future won't have to read minds because we will have done the work ourselves in making our minds readable via technology.

"You can think faster and more accurately, but your electronic devices know where you are," Eckersley and Conway lament, "where you have been, who you have talked to, what you said, what your heart rate was at the time, what you have looked at on the Web, what medication you are taking, what you have bought, what maps you have looked up, what spelling mistakes you make, and it is only accelerating."

At this point, you can fill in your own dystopia. Suggested seeds include precrime, thoughtcrime, impersonation, algorithmic errors and bugs, and bots that can impersonate not just chat patterns but the entire corpus of a person's prosthetic intelligence.

Eckersley and Conway's main concern is with the nature of self-incrimination, which could fit into any number of dystopian futures. Simply, prosthetic intelligence, this part of our thinking minds that we have farmed out to technology, carries no such guarantee of the right to remain silent. If the FBI is granted access to encrypted information via manufactured backdoors, then our rights are in grave danger, the duo argues.

It's a clever case for the right to effective cryptography: we should be able to protect our digital information because it is a vessel for our literal thoughts. And those literal thoughts are protected by those seven words.

"That wonderful gadget in your pocket is not a phone," the paper concludes. "It is a prosthetic part of your mind—which happens to also be able to make telephone calls. We need to think of it as such, and ask again which parts of our thoughts should be categorically shielded against prying by the state."