In light of the recent spate of high-profile hacking campaigns, and the overall poor state of security on the internet, NextGov.com reports that parts of the US government are advocating for a separate, “secure” internet. The idea calls for segmenting “critical” networks (not yet fully defined, but presumably including infrastructure and financial systems) and applying two security mechanisms to these networks: (1) increased deep packet inspection (DPI) to detect and prevent intrusions and malicious data; and (2) strong authentication, at least for clients. The trouble is that this “.secure” internet doesn’t make much technical or economic sense: the security mechanisms are simply not powerful or cost-effective enough to warrant re-engineering an internet.

Whether the idea is to apply different security policies to sites using a special domain name like “.secure” (and possibly the existing .edu and .gov domains), or to create a parallel internet infrastructure, is not yet clear. (Although government representatives say the idea is not to create a parallel infrastructure, that is the most “secure” form of the idea, and I therefore expect the idea to begin to incorporate elements of new, separate infrastructure for the most important networks as the idea matures.)

Intrusion Detection and Prevention

From the NextGov article:

Today, searches of the .gov domain are conducted by the Einstein program, an intrusion prevention and detection system under the direction of the Homeland Security Department that monitors only federal traffic for signs of unauthorized access. It alerts response teams to potential attacks and automatically blocks penetration in some cases.

The .secure network would apparently involve an increase in the use of intrusion detection and prevention systems (IDS and IPS). It’s not clear why increasing the use of such systems would require new legislation or even a special new network. Network operators can, and do, deploy such systems now on their own networks to protect their own sites. (And as we know, the government has no qualms about using DPI to surveil the entire country without a warrant.)

One problem is that IDS/IPS are very expensive to operate. Distinguished security engineers Bellovin et al. explain why expanding the EINSTEIN system to cover much more ground is prohibitively expensive. IDS/IPS works best when carefully tuned for particular, relatively small networks.

Another problem is that IDS and IPS have limited applicability, for several reasons.

There is only a very weak global definition of “malicious” network traffic. Strong security assertions tend to be local to a particular application or network, rather than global to the network as a whole or to all applications. A network request that would destroy site A might be merely meaningless to site B, and possibly even normal functionality for site C. Some traffic is widely agreed to be bad, such as the binary executable for a piece of malware. But even then, security researchers (including those working on the .secure network!) need to download malware to do their jobs.
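The site A/site B distinction can be made concrete with a minimal sketch (the site names, schema, and payload here are hypothetical, not drawn from any real system). The exact same bytes are inert post content to a forum that treats them as data, and destructive SQL to an application that splices them into a query:

```python
import sqlite3

# Hypothetical payload: inert text to one site, destructive SQL to another.
payload = "x'); DROP TABLE accounts; --"

# Site B, a programming forum, stores the bytes as data via a parameterized
# query: the payload is perfectly ordinary post content.
forum_db = sqlite3.connect(":memory:")
forum_db.execute("CREATE TABLE posts (body TEXT)")
forum_db.execute("INSERT INTO posts (body) VALUES (?)", (payload,))
print(forum_db.execute("SELECT body FROM posts").fetchone()[0])

# Site A, a vulnerable app, splices the same bytes directly into SQL.
# The resulting script inserts a row, then drops the whole table.
bank_db = sqlite3.connect(":memory:")
bank_db.execute("CREATE TABLE accounts (name TEXT)")
bank_db.executescript("INSERT INTO accounts (name) VALUES ('" + payload + "')")

try:
    bank_db.execute("SELECT name FROM accounts")
    print("accounts table survived")
except sqlite3.OperationalError:
    print("accounts table destroyed")
```

No string signature over the payload can sort these two cases apart; only the receiving application knows which one it is.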

It is very easy, sometimes even trivial, to encode malicious data in such a way that an IDS/IPS won’t recognize it as malicious, but will still have its evil effect on the target system. (Newsham and Ptacek wrote a pioneering paper on evading IDS/IPS. Hackers have refined the techniques since then, and IDS/IPS vendors have refined their counter-measures. But like signature-based anti-virus software, fundamentally this is a cat-and-mouse game that IDS/IPS systems cannot consistently or conclusively win against a motivated attacker.)
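A minimal sketch of this evasion class (the signature rule and request are hypothetical): a naive signature-matching IDS inspects raw bytes, while the target web server percent-decodes the request before acting on it, so a trivially encoded payload slips past the match:

```python
from urllib.parse import unquote

# Hypothetical signature rule: flag any request containing this byte string.
SIGNATURE = b"/etc/passwd"

def naive_ids_flags(request_bytes: bytes) -> bool:
    """A toy signature-based IDS: match the signature against raw bytes."""
    return SIGNATURE in request_bytes

# The same attack, trivially disguised with percent-encoding. The target
# web server decodes %2F back to "/" before using the parameter.
disguised = b"GET /show?file=%2Fetc%2Fpasswd HTTP/1.1"

print(naive_ids_flags(disguised))                         # False: the IDS sees nothing
print(SIGNATURE in unquote(disguised.decode()).encode())  # True: the server sees the attack
```

Real IDS/IPS normalize common encodings, and real attackers respond with fragmentation, double-encoding, and protocol ambiguities; the sketch just shows why the game never ends.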

IDS/IPS, by their nature, tend to have very high equipment costs because they store and crunch huge amounts of data. Even more expensive are the salaries of the teams of network security experts who have to analyze all that data. As a result, IDS/IPS tends to get defined down; network engineers tend to stop saying “intrusion prevention system” and start saying “post-breach forensic data source”. Obviously, having a way to do forensics is valuable in itself — but reliable, automatic intrusion prevention remains a dream.

IDS/IPS can perform a valuable security function when used correctly — with special care for cost-effectiveness. However, even in the best scenario, they are not powerful or effective enough to warrant fragmenting the internet.

The Fallacy of Authentication

The other component of .secure would be “strong” authentication. In particular, the idea is that there would be no anonymity, and presumably no pseudonymity: a relying party (such as a bank web server or a bank client) would be able to trust that their interlocutor was the “true” bank or client. This would supposedly deter hacking, or at least provide a more reliable forensic trail in post-breach investigations.

But what does “authentication” mean on the internet? People often (implicitly) take it to mean something like “Through this web browser I am talking to the true Wells Fargo Bank, and Wells knows that I am the true Chris Palmer.” However, when one computer presents credentials (such as a username and password pair or a cryptographic certificate) to another, the link between a software data structure and a real-world entity (like a person or a business) is weak. It is no stronger than the person’s or business’s ability to ensure that the computers on both ends are operating correctly and are not compromised, and that the channel between them is secure against network threats. From painful experience we know that our operating systems suffer from numerous design and implementation flaws, and that malware and system hacks are all-too-prevalent.
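To see how little a credential check actually proves, consider this minimal sketch (the password and verifier are hypothetical, and a real system would use a salted, slow hash rather than bare SHA-256). The server learns only that the presenter knows the password; a user on a clean machine and malware replaying a stolen password are indistinguishable to it:

```python
import hashlib
import hmac

# Hypothetical stored verifier for one account's password.
STORED_DIGEST = hashlib.sha256(b"correct horse battery staple").hexdigest()

def authenticate(presented_password: bytes) -> bool:
    """Prove only that the presenter knows the password -- not who they are."""
    digest = hashlib.sha256(presented_password).hexdigest()
    return hmac.compare_digest(digest, STORED_DIGEST)

# The legitimate user types the password...
print(authenticate(b"correct horse battery staple"))  # True

# ...and malware on a compromised machine replays the very same bytes.
stolen = b"correct horse battery staple"
print(authenticate(stolen))                           # True: indistinguishable
```

Certificates and hardware tokens raise the bar for stealing the credential, but the binding problem is the same: the check authenticates whatever holds the secret, and on a compromised endpoint that is the attacker.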

Unless the .secure network runs software far more advanced than what is currently available, malware will run just as rampant, credentials will be compromised just as often, and services and users will be impersonated just as easily as on the real internet.

Conclusion

It’s not for EFF and other internet advocates to “raise privacy and speech concerns” about .secure. It’s for the proponents of this idea to show why it makes any technical or economic sense at all. For economic reasons (such as Metcalfe’s Law and economies of scale), networks tend to converge, not diverge. (We will probably use the same computers (and wires!) to connect to the real internet as to the .secure internet.) Not all participants in the .secure network will have the same incentives or the same ability to bear the opportunity and operational costs. And the costs are only justified if .secure prevents at least as much fraud and loss as it costs to build and operate .secure.

We do not yet know what those costs would be. In particular, we need to know precisely what the separation mechanism between the real internet and .secure will be. The strongest separation — a physically separate network with dedicated machines — is also inordinately expensive. The strength of the separation between the two networks goes down as, inevitably, interconnections between .secure and the real internet are created.

Weaker separation mechanisms, such as VPNs and alternate naming and routing schemes, will cost less. But they are also exposed to the most attack vectors, most notably the menagerie of password-stealing malware on the real internet.

Many organizations already create networks that are in some sense separate from the internet and in some sense limited to trustworthy or rigorously authenticated users, and this can be a useful security measure. But ideas that can make sense at one scale do not necessarily make sense at larger scales. If the government wants to help out with internet security, they should use their vast purchasing power to push vendors for advances in basic software engineering quality. That benefits everyone in the most sustainable and economically productive way.