After massive leaks of State Department cables to Wikileaks, we were told that the State Department had joined an information-sharing network to which 500,000 people had access. All of these people had Top Secret security clearances, but at least one seems to have given material to Wikileaks.

The perpetrator wasn't caught by alert counterespionage agents; he bragged to a fellow hacker, who ratted him out.

Whether you believe that Wikileaks is on the side of the angels or the opposite, the State Department and other government agencies don't like having their intimate, informal thoughts spread out for all to see. Thus, government folk are going to exert strenuous efforts to try to keep such leaks from happening in the future while continuing to fight terrorism effectively.

That's a tough assignment. To understand why, let's look at the way classical government security worked and compare that with what we need now.

Need to Know vs Connecting Dots

My first job out of college was working on nuclear missile systems. My assignment involved writing software for the Poseidon Guidance Computer (PGC). This was in the days of Multiple Independently-targetable Reentry Vehicles (MIRVs), where the missile carried a number of warheads and dealt them out, one for you, one for you, like gamblers dealing cards in Vegas.

Most everything we did was highly classified with information shared on a strict "need to know" basis. We PGC programmers had no idea how the engines worked. Like a programmer working with a sound card, we needed to know the signals to send the engines to go from low thrust to high thrust, but we knew no more than that.

We had no idea how the warheads worked. We knew the signal to tell a warhead to get ready to drop down the Kremlin's smokestacks and how high off the ground to go Bang! but we didn't know any more - the warhead could have been a lump of lead for all we knew. We were forbidden to know anything we didn't need to know.

My legacy from that experience is that, having been a rocket scientist in good standing, whenever I declare, "That's not rocket science," I speak with authority.

Even outside of the classified world, "Need to Know" is standard engineering practice. Suppose you're in charge of designing the Boeing 787, which is about as complex, sophisticated, and expensive for today as the Poseidon ballistic missile submarine system was forty years ago. No one individual can possibly understand all the details of the engines, the avionics, the wings, the fuel system, the galley - you have to delegate and compartmentalize the design team.

Everyone is limited in what they can master, so you have to design the subsystems and interfaces so that people can work together, each an expert in their own specialty area and a relatively ignorant user of the other areas they have to interact with.

"Need to know" makes complex engineering projects possible, but there are risks in compartmentalizing information. The shuttle launch team didn't realize that freezing the O-rings would make the engines fail. They lit off in cold weather, and the Challenger blew up.

The philosophy of "need-to-know," as essential as it is for any complex project, doomed the shuttle. Likewise, though vital for security, "need-to-know" simply won't work in a world where our guardians must "connect the dots" to fight terrorism.

There was much criticism of the TSA and its surrounding infrastructure after the "panty bomber" was subdued by an alert passenger. The bomber's father had told one of our embassies that his son was becoming radicalized. This data was doubtless conveyed to headquarters in a telegram very much like the telegrams that are being posted on Wikileaks.

In the post-Wikileaks brouhaha, the State Department disconnected its cable system from the more open network via which the Wikileaks data were obtained. Instead of making its data available to anyone with clearance, the Department intends to share on a "need to know" basis.

Unfortunately, "need to know" is known not to work for intelligence data gathering. Back in the pre-Google era, a search engine venture named Alta Vista started. The premise was that since simple text search wouldn't work, Alta Vista employees would collectively read the entire World Wide Web and manually index it. You could find what you wanted by looking at their index.

That venture failed for three reasons:

The Web was far too big even then, and growing far too fast to be indexed manually. Human indexers couldn't agree on how to characterize most documents; the index wasn't representative of document content. Some very bright bulbs at Google figured out how to make "simple text search" work well enough to give us most of what we want.

The rest, and Alta Vista, is history.
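The "simple text search" that won out rests on a data structure any programmer can sketch: the inverted index, which maps each word to the documents containing it, so a query costs a few set lookups no matter how many documents exist. Here's a minimal toy sketch (the document contents are invented for illustration; real engines add ranking, stemming, and much more):

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())  # intersect: all words required
    return results

# Hypothetical cable snippets, purely for illustration.
docs = {
    "cable1": "passenger bought ticket for cash",
    "cable2": "routine visa request processed",
    "cable3": "father reports son radicalized bought ticket cash",
}
index = build_index(docs)
print(search(index, "ticket cash"))
```

The point is that no human has to decide in advance which categories a document belongs to; whoever later needs it supplies the words, and the index does the rest.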

The Google Way of Enlightenment

Think about how you find information on Google when you have no idea what the author of the paper you want called the problem. You do a search and read a paper or two. Those papers suggest other words, so you search for those, and on you go. It doesn't take long to zero in on the vocabulary the experts in the field use, and you find what you need.

Could those authors, or any indexers, have known that you'd need to know? Of course not. The authors never heard of you, and if they did, they wouldn't imagine that you'd ever need to know anything about their field.

Could our embassy in Nigeria have known that the TSA would need to know about the person who eventually became the panty bomber? Of course not, but someone googling State Department cables for the names of people buying tickets for cash would have found the report. State Department cables weren't part of the TSA-accessible treasure trove at the time, of course, so it didn't happen.

The "security" issue comes on top of all the institutional obstacles to sharing data, of course. Now that the State Department isn't part of the network any more, the same sort of incident will inevitably recur - and no doubt the same bleating about "information sharing" will ensue, at least until the next Wikileaks dump shoves the pendulum back the other way.

No Solution Now

For the moment, there's simply no solution to the problem of finding needles in haystacks that can't be searched freely and rapidly. We're told that there are about a half-million cleared individuals with access to the network from which the Wikileaks trove came. We're also told that Pvt. Manning is the sole perpetrator of the leak.

If both of these assertions are true, our security system is pretty good - only one person in a half-million leaked, but that's not good enough. As our security professionals tell us, we have to be perfect every time, whereas the bad guys have to succeed only once.

"Need-to-know" is known not to work for fighting terrorism, but we have nothing between that and a system open to a half-million supposedly trustworthy individuals. Suggestions, anyone?