The security guru Bruce Schneier wrote the book on cryptography, runs one of the most-read blogs on security, and is helping Glenn Greenwald analyze the NSA documents released by Edward Snowden. I spoke to him for an hour one morning in October in a courtyard near the Berkman Center for Internet and Society at Harvard Law School, where he is a fellow this semester. For a new book, he's researching "that intersection of security, technology and people," looking at power structures, hacking, surveillance, privacy, and other topics that would be fascinating to anyone remotely interested in the future of the internet. He wore a healthy beard, his hair in a ponytail, and a Hawaiian shirt beneath his blazer, as if to say that nothing, not even this budding Boston winter, could faze him.
Below is a version of our interview edited for length; to see an even more edited version on video, touching on the NSA and personal data, watch it here.
Motherboard: What's cryptography?
Cryptography is basically mathematical secrecy. Wait, that’s bad. This is hard. Cryptography is basically mathematical security. It’s encryption. It’s protecting my data as I send it to you. Or my data on my hard drive. It’s authentication, making sure that my data is what I said it was. A signature, doing that on computers. It’s integrity. It’s a lot of different things, but it’s basically keeping all of these security properties intact as we move from the real world to the digital world.
What is important about security?
Security is how society functions. There are a lot of people in society—and this is true whether you're living in small family groups in the East African highlands in 20,000 BC or 2013 New York City. You're dealing with a large group of people, all of whom will cooperate with each other, will compete with each other, will deceive each other, will be truthful, and security is how that group stays together, how individuals in that group protect themselves, how the group protects itself. Security is really fundamental to how society works.
Security at Denver International Airport (By Dan Paluska)
When you see a new system, you have to assume it's insecure.
What's our biggest misconception about security?
Probably the biggest problem with the public's perception of security is that things are secure as a default. We see this a lot in the voting industry. The voting machine companies will come up with an internet voting machine or electronic voting machine and the onus will be on the security company to prove that it's broken. It'll be assumed secure, and that's just nonsense. When you see a new system, you have to assume it's insecure, unless you can prove it's secure. The public perception is reversed. "I have a door lock, it's secure unless you show me you can break it." That's not right—it's insecure unless you can show me that it is secure.
The balance of power—computing power but also political power over communications networks—has shifted back and forth a lot since the early days of the internet. What's the connection between security and power online, and how are you examining it now?
The internet is interesting because it really changes so many things. When the internet was born, there was this belief that it would vastly change the power structure. There's a great quote from John Perry Barlow in the mid-'90s at the World Economic Forum, and he basically says the governments of the world have no business on the internet, that they have no power over the internet. You can't legislate it. The internet is its own thing. It's a really utopian way of looking at the world, but we believed it. We believed the internet would change the world, would give power to the powerless. And it did, for many years. The ability to organize, to coordinate—it made so many things different.
And that changed recently. Governments discovered the internet. So now we're seeing that in China, for example, the internet is a tool of social control, and now both sides are using the internet. The Syrian dissidents are using the internet to organize, the Syrian government uses the internet to round up dissidents. There's an interplay between the institutionally powerful—the governments and corporations—and the distributively powerful—dissident groups, criminals, and hackers. How they both use the internet to increase their power, how they use the internet against each other, I think is fascinating. It's something that we need to look at. In the coming years we're going to see a lot more power struggles play out on the internet. And I'm just curious how that's gonna end up—it's not at all obvious.
I'm curious about the role of the hacker in all of this. There are state-sponsored hackers in Maryland and Shanghai and there are criminal hackers in Prague and there are hackers like Jacob Appelbaum trying to defend privacy and liberty online. But, wait—what is hacking?
Hacking has become a maligned term recently, because it's believed to be criminal. Traditionally a hacker is someone who figures out how something works, or makes it his own. "I hacked together this prototype": that's the traditional definition. So hacker is really a term of art, a term of skill, not a term of morality. In the '90s "hacker" became criminal. Recently, we've been trying to take back the word hacker. You'll hear about "hackerspaces"—spaces people can go and do traditional hacking—you know, take apart technology, put it back together, see how it works. And hackers in that context have always been at the forefront of understanding technology, because they're the ones who are gonna do unauthorized things.
The two best things about Twitter, the hashtag and the "@" reply, were not invented by Twitter. They were hacks. They were invented by the community. Hackers are always going to be doing that, figuring out how things work. And as they do that they're going to figure out how things fail. When you get to the security definition of a hacker, it's someone who breaks into stuff. Because how else do you figure out how it works? By breaking into stuff, you learn how to fix stuff. A lot of our best security comes from hackers breaking stuff, or hackers telling us how to fix stuff. There's a lot of back and forth, and hackers provide that tight feedback loop between technology and how people will use technology.
What are some ways that you've seen, in the past decade, hackers demonstrate very valuable exploits that have helped us become more secure? Are there particular examples that come to mind? Security flaws that hackers have shown we need to fix?
We see a lot of vulnerabilities in operating systems, from computers to phones, coming out of the hacker community. A lot of vulnerabilities in hotel key cards—there's a lot of insecurities in those key card door locks, and hackers have demonstrated a lot of them. Voting machines—those voting machines are very insecure. In order to get those fixed, we need information about how they can be hacked. We're getting that. There, you're seeing more hackers coming out of universities doing that because they have the access to the equipment. But it's the same thing whether you're doing it at home or at a research institution. You're looking at a system, taking it apart, looking at how it fails, and putting it back together.
The demise of the Silk Road is a really good and powerful story, and illustrates something that Snowden said in his first interview: that the math works, that cryptography does work. It's just hard to use it right.
What about attacks that affect infrastructure? Obviously the past few years have shown that industry, cities, utilities, even vehicles are vulnerable to hacking. Are those serious threats?
There are threats to all embedded systems. We've seen groups, mostly at universities, hacking into medical devices, hacking into automobiles, into various security cameras, and demonstrating the vulnerabilities. There's not a lot of fixing at this time. The industries are still largely ignoring the problem, much like the computer industry did maybe twenty years ago, when it belittled the problem, pretended it wasn't there. But we'll get there.
When I look at the bigger embedded systems, the power grids, various infrastructure systems in cities, there are vulnerabilities. I worry about them a little less because they're so obscure. But I still think we need to start figuring out how to fix them, because I think there are a lot of hidden vulnerabilities in embedded systems.
Are there particular security concerns right now that you think the public, given its misunderstanding about security, doesn't appreciate enough?
I'm most worried about potential security vulnerabilities in the powerful institutions we're trusting with our data, with our security. I'm worried about companies like Google and Microsoft and Facebook. I'm worried about governments, the US and other governments. I'm worried about how they are using our data, how they're storing our data, and what happens to it. I'm less worried about the criminals. I think we've kinda got cyber-crime under control, it's not zero but it never will be. I'm much more worried about the powerful abusing us than the un-powerful abusing us.
What role does the public play in that particular dynamic? Everybody's got a smartphone, we're sharing data all the time, fairly freely. Do we bear a certain amount of responsibility for this particular power grab that you're describing?
Yes and no. The public, yes, we give our data willingly to Apple on our iPhone, to Facebook. But we don't really do it consciously. We're doing it incidentally. We're on Facebook because we want to talk to our friends; we don't log onto Facebook and say, "let's tell advertisers all about us today!" We don't pick up our phone and say "I'm going to let Apple surveil me all day." We want to get phone calls. So it's happening in the background, and for most people it's not salient. I don't like blaming the people—that feels really unfair. We really have to blame the systems that we've built that enable all of this surveillance.
And the public's role is crucial. There are gonna be laws that protect our data, protect our privacy, that limit what governments and corporations can do with our data. And that kind of political change is going to be slow in coming, but eventually that's where we're going to get our security. So what I want the public to do is be aware of these things, be aware of their pluses and minuses, and agitate for political change.
Do you think political change will depend upon cryptography and the cypherpunks, the hackers, the internet activists?
Political change comes from people who want political change. Always. What the hacker community does, what the cypherpunks do, what crypto does, is give them infrastructure, give them technology to operate more safely, more securely. I don't think you could ever say that crypto forces political change, but crypto helps enable it, possibly.
From a Stop Watching Us rally
I’m actually kind of anti-paranoid. But the more I see the more I wonder how much the people I used to think were paranoid were right.
It’s interesting that the US Navy helped to create a cypherpunk tool like Tor, which helps dissidents communicate anonymously, while the NSA and the FBI and others are also working to break it.
Tor's interesting because it really illustrates the dual missions of the NSA. Tor was funded by the government. It’s used by dissidents all over the world, used by reporters, used by people in repressive countries. Tor saves lives. Tor is a valuable tool for people on the internet going to places where the governments don't want them to. At the same time, the bad guys use Tor. Criminals use Tor. Pedophiles use Tor. Terrorist groups use Tor. So it’s just like any other piece of infrastructure. Open up a restaurant, and honest people can eat there, and bank robbers can eat there. You can say, "I’m gonna close this restaurant, so bank robbers can’t get food," but what about all the other people eating food?
Tor's like that—many good people use Tor. It’s so valuable, and that’s why the government funded it.
What we learned about Tor in the stories about the NSA, is that Tor works. Tor is secure. The NSA cannot break Tor. The guy who ran the Silk Road was arrested, but it’s not because Tor was broken. It’s because he made a bunch of mistakes. So I think it’s a really good and powerful story, and illustrates something that Snowden said in his first interview: that the math works, that cryptography does work. It's just hard to use it right.
What’s your favorite password?
What does that mean?
It’s the most common password. It used to be that “password” was the most common password. Now it’s “password1.” That number "1" fools all the hackers. Don’t tell them.
Is the new iPhone as secure as people say it is, do you think?
I don’t think we know. The iPhone in general is very secure. You know this because there is a black market for security vulnerabilities, and iPhone vulnerabilities are the most expensive ones, which means they’re the hardest to find.
How secure are our computer chips? There's been speculation about deeper levels of surveillance.
There's a lot of rumors about hacking chips. There’s rumors that the NSA has successfully infiltrated Cisco and Intel equipment. We have no idea, and no idea what it would mean. We have rumors that the Chinese have infiltrated Huawei equipment too. This is something where we just have rumors.
What kind of operating systems do you use?
I have an iPhone; I use a Windows computer; I have a Linux box. I do quite a lot of different things.
And you also have a computer that you have not connected to the internet at all, is that right?
I do, and it’s for very special purposes, for looking at the Snowden documents. And this is just ultra-precaution because these are sensitive documents. I can’t do everything. I can’t protect against TEMPEST attacks [attacks that exploit electromagnetic emanations leaking from electronic equipment], against black-bag operations against my house when I’m not there. But there are things I can do, and I’m taking extra precautions.
NSA PRISM slide 3
Largely, my security comes from not having anything particularly damaging on my computer. If a powerful government wanted into my computer, I don't think I could keep them out.
What kinds of things do you do in your daily life to protect yourself?
A lot of what I do isn't different than what everyone else does. I have a computer, I run anti-virus, I run a personal firewall. I try to have a good bullshit detector of what I should and shouldn't do. I don't do much that's different. That changed recently when I started having access to Edward Snowden's documents, and I'm doing other things that are a bit more complicated. I wrote a couple essays about them.
But largely, my security comes from not having anything particularly damaging on my computer. That's not great, but I recognize that if a powerful government wanted into my computer, I don't think I could keep them out. So, at that point, you just have to assume that you're not being targeted. And if you are, the data they're after is data you're just not going to have.
Do you get paranoid?
I don’t. I’m actually kind of anti-paranoid. I don’t think I succumb to paranoia, but the more I see the more I wonder how much the people I used to think were paranoid were right. And we don’t know. I do think a lot of what the NSA does is targeted. And it’s targeted against the people they should be targeting, but I do think there are some broad collection programs and those are the things I’m most appalled about.
Did you always assume that the government had this sort of power? Maybe more poignantly, what changed after the Snowden revelations?
Nothing in the Snowden documents, technically, is really a surprise. That's the interesting thing. But, it's all a surprise. I think it's because we never actually did the math. We could have assumed the NSA would do this and do that, but we never actually looked at the budget and calculated what that money actually meant. And seeing it all there, laid out—the sheer expansiveness, the sheer scope—that was surprising, even though there's no one thing where we said "wow, it never occurred to me they could do that."
And maybe that's a failure on our part, but I think we all kinda did it. It always felt a little paranoid to talk about how the NSA might do this, the NSA might do that. And now, when we see these documents, this is what they're doing. And when you look at it, you say, yeah, that makes sense. If you had a few billion dollars, this is what you'd do. You give the best hackers the best budget and you get these sorts of programs.
There's this analogy I've been using. It's a tough one but I think it's actually accurate, and that's thinking about death. It's not a surprise, everybody dies—it's the way the story ends. Yet, when it happens, it's always a surprise. And that's because we just don't think about it. We decide not to think about it. It's almost a conscious decision. I think this NSA surveillance was sort of like that. We just didn't think about it very much except in generalizations. And now, what Snowden is doing is he's forcing us to think about it, very explicitly.
Are we thinking about it enough do you think?
Well, I am! (laughs) It depends who "we" are. In the security community, we're thinking about it a lot. In the broader political community, not so much. Outside of that, in the non-political world, probably not at all. And that's my worry, about whether or not we're gonna have change. I'm not sure that the magnitude of what's going on is precipitating out into the general populace. And in some ways I'd be surprised if it was. These are very technical stories, these are very complicated issues. But we'll see. There is movement on Capitol Hill. This issue doesn't fall on normal political lines, which is good, so let's just see what happens.
Do you see a new, expanded role for a whistleblower in a scenario like ours?
I think that whistleblowing is a form of civil disobedience for the information age. The notions of secrecy have changed, and I think the NSA is learning this year what corporations learned five years ago: that you can’t keep secrets to the same degree that you could in the pre-internet world. And whistleblowing is a way that someone who has no power can stand up to power, can stand up to the man. And I think we’re going to see more of it. It’s very much a generational thing. Edward Snowden is 30; Chelsea Manning was 25 when she did the leaking. These sorts of generational gaps on how secrecy works are going to increasingly fracture, and we’re going to see more whistleblowing. Whistleblowing is how someone without power can get power.
We’re losing that safe space to break the law for demonstration purposes that we have in the real world, and I think that is bad for society.
This sounds like the hacker too, the political hacker.
I’m not sure if hacking has become more political or newspapers are just reporting more political hacking. But I think it’s always been political, hacking. And you can go back into the '90s and the '80s and you can see different political hacking groups. Now it’s easier, I think, to get a platform, because the newspapers are reporting it, which makes it a more viable platform.
And I do see hacking as another form of civil disobedience. What’s interesting about both of these is the question—and I don’t know the answer to it—of whether we’re moving to a world where you need a certain amount of tech skill to engage in civil disobedience, where not just anybody can do it. I think that would be sad.
It seems that, as with many new things, the law is struggling to catch up with our notions of this kind of civil disobedience.
And I think that’s unfortunate, that the law is really criminalizing civil disobedience. Civil disobedience traditionally was almost a theater: you would do something, it would be a sit-in, and you’d get arrested, because that was the police’s job. But you weren’t really arrested—you’d be taken downtown, you’d be booked, and you’d be let go. It’s a statement that requires the complicity of the police. We’re all doing this piece of theater to demonstrate what we’re demonstrating.
But now, there isn’t this hacking space for civil disobedience. So if you hack into a company to deface their website and make a political statement, as Greenpeace might’ve done physically, with a physical sign, you’re going to be charged with a computer crime, to a much greater degree. And if you break into Apple’s iPhone in order to show a security flaw, you’re going to be charged to the same degree a criminal would. So we’re losing that safe space to break the law for demonstration purposes that we have in the real world, and I think that is bad for society.
The laws are draconian because we value computers. An interesting analogy is that back in the American West, there were a lot of areas where stealing a horse was a capital crime, punishable by death. And it kind of makes no sense, until you think about how important horses were to society. So the punishment didn’t fit the crime; it fit the value of the infrastructure. And I think something similar is going on with computers and networks today. The punishments are overly draconian, because we’re overly scared of the effects. So even if civil disobedience on the internet is harmless, inasmuch as Greenpeace is harmless, we’re punishing it more because we’re more scared of it.
Do you find time to get away from the internet? Has your own personal relationship with the internet changed since you've been grappling with these lines of power?
It hasn’t. The internet is where a lot of my socialization happens. I don’t get away very often because that’s where my friends are, that’s where I want to be, that’s where my work is, that’s where my writing is. I take a once-a-year no-CPU vacation, and they’re very enjoyable. But they’re also hardships. The internet is so essential in our lives that it’s hard to break away. And you don’t want to. You don’t get invited to parties. You miss the news of your friends. You're there because you want to be, not because you’re forced to be. And that’s why we need the internet to be safe and open and free.