Why doesn’t cybersecurity icon Dan Geer carry a cell phone? If he doesn’t understand how something works in detail, he says, he won’t use it. Yet he’s no Luddite: as chief information security officer at In-Q-Tel, the nonprofit venture arm of the CIA, Geer has one of the clearest views of the future of security technology. His personal vision? To put those technologies (as well as new laws and policies) to work in ways that governments and corporations around the world today are too feeble, dysfunctional, or corrupt to implement themselves.

Geer argues that the EU’s “right to be forgotten” doesn’t go far enough, that software needs liability policies, and that governments should buy and disclose all zero-day vulnerabilities to prevent countries from stockpiling cyber weapons. Geer’s ideas (outlined in 10 points he proposed in his keynote at the Black Hat USA 2014 conference) don’t win him many friends in policy or software development, but they’re certainly aligned with a core belief that is tough to argue with: sticking with the status quo is a dangerous path to follow. #MakeTechHuman talked to Geer about what a better road might look like.

Let’s start with an optimistic question. When it comes to privacy and security, what should we not be worried about right now?

Let’s take phishing e-mail for a second. As you’re probably aware, the people who do it are getting better. They don’t misspell as often, their grammar is good, the plausibility of their backstory is getting really, really good. On the other hand, for people I know really well, my children for example, I can honestly say it’s going to be really hard to fake one of my daughter’s e-mails to me, because there’s a certain style I would recognize no matter where I was.

So there is a kind of bifurcation now that says you can, as a human, recognize communications from a limited number of people that really are people that you really do know on a close basis. But the original Internet dream—and I’m not making fun of it, let me be clear—was suddenly you could talk to anybody on the planet. That’s still true, but you should be careful when you talk to anybody on the planet, because the provenance of that is quite unclear. So we’re beginning to, in effect, enhance the value of communications from people we know because we are beginning to devalue the communications from people we don’t know.

What’s the most outlandish invasion of privacy you’ve seen?

What I would call most egregious has to do with data fusion. If I’m in front of an audience (and I’ve actually done this twice, and described it many more times than that), I pick someone from the audience and say, “I’d like to ask you an embarrassing question.” And so you ask them an embarrassing question. Not terribly embarrassing, by the way, just something like: how many pairs of unused underwear do you have in your drawers? And almost always people will answer. But if you keep asking questions, they eventually balk. And the reason they balk is that the sum of the answers is greater than the parts. When I say data fusion, I mean the ability to take data from disparate sources and put it together.
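Geer’s point about data fusion can be sketched in a few lines. Each source below is a hypothetical, individually unremarkable record (all names and fields here are invented for illustration), but joining them on a shared key yields a profile that no single source contains on its own:

```python
# Hypothetical, individually innocuous data sources keyed by person.
phone_metadata = {"alice": {"late_night_calls": 14}}
search_history = {"alice": {"queries": ["divorce lawyer near me"]}}
location_pings = {"alice": {"frequent_spot": "bar on 5th St"}}

def fuse(*sources):
    """Merge per-person records from disparate sources into one profile."""
    profiles = {}
    for source in sources:
        for person, record in source.items():
            profiles.setdefault(person, {}).update(record)
    return profiles

profiles = fuse(phone_metadata, search_history, location_pings)
print(profiles["alice"])
# The fused profile holds all three facts at once, which is exactly
# the "sum greater than the parts" effect Geer describes.
```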

Bruce Schneier had a very interesting comment on the privacy front. He didn’t pose it as a question, he stated it, but I’ll pretend that he asked. In the equation of privacy, with every additional piece of information, every additional mechanism of observing you (first we have your telephone metadata, now we have drone pictures of your house, now we have the searches you did at Google, and on and on), does each additional avenue contribute to a linear sum? Or does it act as an exponent in that equation? I actually side with the argument that it is greater than linear, and maybe even exponential.
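One way to see why the combined effect can be greater than linear: treat each observed attribute as a filter on the set of people you could possibly be. If the attributes are roughly independent, the candidate pool shrinks multiplicatively, so identifying power compounds rather than adds. The selectivity fractions below are made-up numbers purely for illustration:

```python
# Each observed attribute narrows the anonymity set multiplicatively.
population = 1_000_000
# Hypothetical fraction of the population sharing each observed attribute
# (e.g. region, car model, a search term, a flight route).
attribute_selectivity = [0.5, 0.1, 0.2, 0.05]

candidates = population
for fraction in attribute_selectivity:
    candidates *= fraction  # each new observation filters the remaining pool

print(int(candidates))  # prints 500
```

Four modest observations cut a million people down to about 500 candidates, which is the sense in which each new data stream can be worth far more than one more additive term.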

What the marketing people are trying to do, which is to develop a complete picture of the individual such that everywhere you go there’s an advertisement with your name on it, I would argue is egregious, in the sense that for that activity there is no scientific distinction between targeting and personalization, except for the intent of the analyst. And so I think it’s egregious not because I’m affronted by it per se, but because it’s building an apparatus that would allow, in effect, my view of the world to be completely personalized, and as such, how would I know? Now we’re in The Truman Show. If all of your interactions with the rest of the world were personalized, how would you know?

Why doesn’t the EU’s “right to be forgotten” go far enough?

What I think you have a right to do is to be unobserved by facilities more complicated than the human eye and ear. I am well aware that the number of avenues of observability is expanding.

It isn’t that I really want the right to be forgotten. What I want is the right to be not recorded. And I want that to be understood to mean, oh, nothing exotic, like you can’t look at me or you can’t listen to me if I’m mumbling to myself on the street. I don’t mean something crazy like that. I just mean the idea that I would prefer to think that when I choose to have other people hear me, or see me, or whatever, I have chosen that, but that it is not something that other people can choose to do.

There is a slight distinction here that is probably important, and that is your definition of privacy versus secrecy. One can argue that privacy is something that others give you; secrecy is something you take for yourself. And I think a great deal of the interest in encryption, for example, is because people realize that they’re not going to be given privacy, so they have to acquire secrecy as a fallback. And I’m of that character, in effect.

Why should the U.S. government buy and disclose all zero-day vulnerabilities?

Bruce Schneier asked a coherent question on that, too, which was, “Are security vulnerabilities sparse or dense?” If they are sparse, then finding them and closing the holes is useful. If they are dense, finding them and closing them is not useful, because you’re just wasting your effort. I happen to think that exploitable vulnerabilities, which are the ones that actually matter, are probably relatively sparse.

Now, how do you find them? The answer is you find them through hard work. And we have made it too hard to find vulnerabilities as a hobby. It has to be a job. And there are lots of people whose job it is, and some of those people are nice and some are not nice. What can we do about that? I think the answer is markets. What I suggested was this: the U.S. government, being who it is and what it is, says show us an exploitable vulnerability [Heartbleed is an example of this], show us a competing bid [there’s a black market for these], and we’ll pay 10x. We are in a financial position to flat out corner the market.

And I would suggest that if we did that, the one requirement for the process would be that we then make it public. Maybe we don’t make it public the same day, maybe we do it next week, maybe we give the manufacturer, who is about to be embarrassed like nothing else, a chance to pull up their pants. I’m OK with that. I’m OK with all the rules that come with responsible disclosure. I’m OK with saying we found a whopping flaw, it’s going to take six weeks to fix it, we will hold the information for six weeks, but not a minute longer.

In this process, one, we have forced vendors to fix the flaw, and two, what if country XYZ is accumulating cyber weapons? We have just erased one of their cyber weapons if they knew about it. And if they didn’t know about it, we’re no worse off than we were at the beginning.

You try to stay off the network as much as possible, carrying a pager and no cellphone. How do you evaluate technology in your personal life?

I am getting older, and I have to allow for the fact that perhaps that explains everything, though I don’t think so. I am, as a rule, skeptical of coming to rely on things when I don’t know how they work. If there’s anything I’ve come to be relatively adamant about, it’s that we, as humans, have repeatedly demonstrated that we can quite clearly build things more complex than we can then manage, our friends in finance and flash crashes being a fine example of that.

Given what I know in the cybersecurity arena, the number of things whose workings, in effect, nobody understands causes me to ask: well, then why do I want to depend on them?
