Some time ago I began an answer to Jack Goldsmith on why I thought cybersecurity regulation was the wrong answer to our current cyber problems. Other commitments (including paying clients!) got in the way of further developing the argument, but I have some time now to return to the discussion.

In my introductory post on this topic, the first point I made is perhaps the most controversial – I don’t think that the cyber threat is as extreme (what I called “existential”) as some others do. And because I am skeptical about the graveness of the threat, I am doubly skeptical of the need for a new regulatory leviathan.

Let me begin by acknowledging that mine is a minority view, and also that many people whose opinions I deeply respect disagree with me. I am particularly cognizant that some with access to classified information to which I am not privy (like Cyber Command General Keith Alexander) are persuaded of the depth of the threat, a circumstance that has given me a significant degree of self-doubt. Nonetheless, after great reflection, I am simply not persuaded that the threat to critical infrastructure (CI) is so great as to necessitate a new DHS-led regulatory process. If I thought it were – if I thought that there was a realistic possibility that significant swathes of CI could and would be taken down by outside threats – then I would consider the threat existential. I might still oppose regulatory intervention (on the independent ground that I don’t think it would work), but I would recede from my belief that it is unnecessary. But the case for the graveness of the threat just hasn’t been made – at least not to my satisfaction.

First, let me be clear – there is virtually no dispute that cyber vulnerabilities are real. Nor is there any reason to doubt that cyber crime (fraud, identity theft, and theft of intellectual property) is rampant. And there is also no doubt that, despite its denials, China is engaged in an aggressive program of cyber espionage, targeting American national security-related institutions and private sector intellectual property.

But one should be suspicious when those instances of cyber intrusion are seen as a justification for a cyber regulatory structure. Often, determining the right solution depends on asking the right question. If the question is “how do we reduce cyber crime?” the answer is likely some combination of stronger punishments (for deterrence), more and better investigative tools, and (in this context) enhanced international cooperation. Saying that regulating a baseline standard of cybersecurity is an answer to cyber crime is a bit like saying that the answer to street crime is a mandate for bars on the windows or stronger door locks. Those are nice ideas – but nobody would think, in the criminal context, that they were a fit subject for government regulation, or that government regulation of the behavior of innocent actors was the optimal means of thwarting criminal behavior.

Indeed, those who are victims of theft have ample reason to protect themselves, and little claim to the adoption of a system in which the government forces them to be careful. And to the extent that others (like a bank) are responsible for the losses of their customers, it is increasingly likely that a system of commercial liability will create the right incentives to protect against criminal behavior.

Or, to put it another way, the costs of cybercrime are quite significant. The data are thin, but what we can see is that annual losses are on the order of $388 billion (according to Symantec) to $1 trillion (a McAfee estimate). Substantial numbers to be sure, but not threats to the very fabric of the worldwide economy, which produces an estimated $72 trillion of GDP annually. If cyber crime were the only problem we faced, regulation would not likely be the answer we chose.
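The arithmetic behind that point is worth making explicit. A quick back-of-the-envelope calculation (a sketch only, taking the high-end McAfee loss estimate and the GDP figure above at face value):

```python
# Back-of-envelope: cybercrime losses as a share of world GDP,
# using the figures cited above. Both numbers are rough, contested
# estimates, not measurements.
mcafee_losses = 1e12   # $1 trillion, the high-end McAfee estimate
world_gdp = 72e12      # ~$72 trillion of annual world GDP

share = mcafee_losses / world_gdp
print(f"Even at the high end, cybercrime is about {share:.1%} of world GDP")
```

Even accepting the most alarming estimate, losses come to roughly 1.4 percent of global output: serious, but hardly a threat to the fabric of the world economy.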

Likewise, if espionage is the problem, then the answer is exactly what we are doing now – expanding the protective national security umbrella around critical members of the defense industrial base. Here another factor comes into play – private sector actors actually do have the right incentives to keep control over their own intellectual property (or, in the case of federal contractors, the US government can create those incentives by operation of contract). Those who support regulation in this part of the domain need to explain why the voluntary Defense Industrial Base pilot (initiated by DoD and just recently transferred to DHS) is inadequate. Lockheed Martin (to take an example) is more than capable of maintaining its own cyber-integrity, and more than incentivized to do so. If espionage is the problem, broad regulation doesn’t seem to be the answer either. Indeed, if we fear national security espionage and we think regulation (rather than contractual obligation) is necessary, why not regulate just that sector?

So, just to sum up the argument thus far – much of the impetus for regulation relies on the widespread existence of serious vulnerabilities. But most of the consequences of those vulnerabilities – theft and espionage – are not, it seems to me, an adequate justification for a regulatory system.

The only argument that, in my judgment, justifies a regulatory intervention is the fear of something more catastrophic – not IP theft or even the theft of national security secrets, but rather the threat of economic disruption on a grand enough scale that the US government would be disabled from responding to external threats, or severely dissuaded from doing so. The model here is the idea that if China gets into a conflict with Taiwan, the US might be prevented from intervening militarily by virtue of the vulnerability of our CI. Or, alternatively, the NSA might be preemptively disabled by a takedown of the BG&E electric grid.

To this I think it is worth making three points in response. First and foremost, the risk of catastrophic failure on a broad scale within our CI is possibly overstated. Let’s take, as a starting point, the electric grid, which is often thought of as the most vulnerable network. Even though the grid is highly interconnected, the literature from inside the electricity generation community suggests that true catastrophic failure of the grid is unlikely. To be sure, they have an incentive to downplay the risk, but it is the best evidence we have.

As a 2004 CRS report put it, the damage resulting from a cyberattack would be comparable to other instances of equipment failure, for which plans and procedures are already in place. To be sure, those plans are not perfect (see the 2012 derecho that hit Washington, DC), but they are usable whatever the cause of the outage. And, of course, as of today there have been no reported terrorist cyberattacks on industrial control systems in the US that have caused significant, publicly reported, damage (note the caveats – but I would have thought that any successful attack would have been publicized as a way of generating public awareness of the threat).

According to Sean Gorman of George Mason University, the idea that the nation’s infrastructure is at risk of cyber terrorism is suspect – terrorist organizations are more likely to follow the path of least resistance and low cost, using physical attacks and bombs. A study completed by the U.S. Naval War College (cited by Gorman) attempted to simulate a “digital Pearl Harbor” attack on the nation’s critical infrastructure and found that “a group of hackers couldn’t single-handedly bring down the … infrastructure, but a terrorist team would be able to do significant localized damage to U.S. systems.”

One electric grid analyst, Jacob Feinstein, asserts (in a chapter only available with an account, I’m afraid) that widespread failure of the power distribution system is virtually impossible to achieve. Because of the self-healing characteristics of the power system, large-scale blackouts rarely result. According to Feinstein, the famous 2003 Northeast blackout (where a fairly large outage did occur) demonstrates that the power grid is not of significant interest as a terrorist target. Since the 2003 blackout did not cause an economic collapse or significant civil unrest, injury, or death, the power grid is arguably not an attractive target, though obviously “the economic and societal consequences of a long-term blackout depend on the area affected and the duration of the outage.”

I haven’t done the same sort of dive into the literature about, say, the water treatment systems of the United States, but my instinct is that they are, if anything, even less comprehensively vulnerable. The electric grid is at risk precisely because of its broad interconnection. Other systems (like water treatment or subways) are discrete and not linked in a single network. Thus, a large-scale attack that took out more than one such system would have to be highly coordinated and very broad-based in its degree of intrusion. That’s likely to be far more difficult to achieve and also, correspondingly, more readily observable on the defensive side.

Second, most of the data we have about the level of intrusions simply overstate the threat and substitute counting for analysis. Typical is the ICS-CERT Incident Summary Report, issued just a few days ago by the Department of Homeland Security. You almost certainly read about it, because it breathlessly reported a quintupling of cyber incidents (from 41 in 2010 to 198 in 2011) involving the utility sector. Scary stuff.

Color me skeptical.

For one thing, the big jump is at least partially attributable to better reporting systems. Now that the government is in the business of collecting this information, the private sector is increasingly willing to provide it. I have no idea (nor does anyone else) how this affects the counting. [And, though it isn’t strictly relevant to my point, I find it deeply amusing that one of the incidents reported for 2010 is, of course, Stuxnet – an intrusion in the US for which our own government is likely responsible. Hardly a poster child for the clarion call of greater vulnerability!]

For another, many of the incidents involved a single systemic flaw – an “Internet facing control system [that] employed a remote access platform from the same vendor, configured with an unsecure authentication mechanism.” The lack of security is a legacy of a time when industrial control systems were simply not built with security in mind – a legacy that is changing with or without regulation.

But the most notable part of the report is that … well, nothing actually happened. All of the vulnerabilities identified and the intrusions attempted reflected contingent vulnerabilities, not actual harm. And even the contingencies seemed, on closer examination, pretty remote. Here is what the report said about the most common form of attempted attack – a spear-phishing email: “Sophisticated threat actors were present in 11 of the 17 incidents, including the actors utilizing spear-phishing tactics to compromise networks. These threat actors were responsible for data exfiltration in several cases, which seems to have been the primary motive for intrusion. No intrusions were identified directly into control system networks. However, given the flat and interconnected nature of many of these organization’s networks, threat actors, once they have gained a presence, have the potential to move laterally into other portions of the network, including the control system, where they could compromise critical infrastructure operations.” In other words, most of what we saw was theft of IP; no control systems were infiltrated; but the possibility of infiltration exists because the email systems are interconnected with the control systems.

To which the only answer is: do we need regulators to tell us that good cyber hygiene involves isolating the critical ICS/SCADA systems that operate CI? Maybe those who support regulation think we do. If so, I’d welcome data showing that – that is, evidence of electric grid companies that haven’t taken these steps yet. And, to the extent regulation is necessary … well, we already have the guidance.

And now, a final, broader point: I think that this entire line of thought mistakes vulnerability for risk. Yes, vulnerabilities exist – even for catastrophic CI attacks. And the consequences of such an attack would be severe (though how much more severe than this past week’s “Derecho” is debatable). But vulnerability isn’t risk – you have to find someone who actually wants to implement a threat and has the capability to do so.

Right now there aren’t a lot of “someones” out there. Just reflect for a moment on what it took to make the Stuxnet virus operational. According to the public reports (which, I hasten to add, are the only basis for my knowledge!), the creators of Stuxnet had a detailed insider’s knowledge of the way in which the Siemens controllers worked, suggesting either active assistance from the company or a significant intrusion there in the first instance. That knowledge was not, however, enough. A broad-scale cyber espionage program active over the course of several years (known as Flame) was required to closely map the Iranian cyber systems and discover the precise contours of the vulnerabilities to be exploited. Mock-ups of the centrifuge system had to be built (using, allegedly, centrifuges received from Libya back when Libya was trying to be nice to the West) and tests run to prove the viability of the virus. Four separate zero-day exploits had to be identified and incorporated – a wildly profligate use of zero-days. Once the program was ready, some type of espionage (a human agent? social engineering?) was needed to get the virus inside the air-gapped Iranian system. Plus, of course, someone had to have the cyber-jedi skills to write the code in the first place.

This is most assuredly not the stuff of a small-scale hacker group or terrorist cell. It’s the product of a nation-state. So … do we really think China is going to do this to us? When we can do it to them in return?

I don’t mean to sound overly sanguine about the prospect of a catastrophic cyber intrusion on CI. Certainly, the vulnerabilities exist. Indeed, in 2007 the National Research Council said that:

high-level threats—spawned by motivated, sophisticated, and well-resourced adversaries—could increase very quickly on a very short time-scale, potentially leading to what some dub a “digital Pearl Harbor” (that is, a catastrophic event whose occurrence can be unambiguously traced to flaws in cybersecurity)—and that the nation’s IT vendors and users (both individual and corporate) would have to respond very quickly when such threats emerge.

But what is striking to me is that this prediction was made in 2007 – almost three generations ago in terms of cyber processing power – and yet nothing has happened. I am deeply and painfully aware of all the people who have gone wrong in predicting technological developments – the folks who said planes would never fly or that battleships would never be sunk by airplanes – and of the equally dismal record of those predicting future political developments (the Soviet Union will never fall). But I can only work with what I have in front of me, and my bottom line on the cyber threat is this:

Today, right now, the only actors capable of even thinking about a large-scale crippling cyber assault are nation-states. The likelihood that they will do so is roughly the same as the chances of a large-scale kinetic war. If you think we are going to get into a shooting war with China anytime soon, then by all means, be afraid. But if, like me, you think the chances of a kinetic conflict with nation-state peers are slim, then so, too, is the chance of a cyber war. And for now, today, the chaotic actors whom we might fear more (like Anonymous or terrorists) just don’t have these capabilities. When they will get them is deeply and radically uncertain.

And so, since I think that the possibility of a catastrophic CI attack is exceedingly slim, and since (as I’ll explain in the next post, whenever I get a chance to write it) I think that the costs of a regulatory system robust enough to be of any value will be very high (in terms of direct costs to consumers and indirect costs through lost innovation), for me the game just isn’t worth the candle. Without a better case for cyber CI catastrophe as a realistic possibility – and not just a theoretical vulnerability – I’m not persuaded a regulatory system is needed.