Department of Homeland Security employees work inside the National Cybersecurity and Communications Integration Center in Arlington, Va. Kevin Lamarque | Reuters

A $6 billion security system intended to keep hackers out of computers belonging to federal agencies isn't living up to expectations, an audit by the Government Accountability Office has found. The public version of the audit, released last week, concerns the Einstein system, formally called the National Cybersecurity Protection System and operated by the U.S. Department of Homeland Security; a secret version containing more sensitive findings was circulated to government agencies in November. The GAO found that the system has only a limited ability to detect the anomalies in network traffic that can indicate an attempted attack. What it can do is scan for attacks that match a list of known methods, or signatures. Most of the signatures it scans for are available in commercial-grade products, though a few were developed specially for the government.

The system relies solely on signatures; it doesn't use more sophisticated methods, such as analyzing anomalies, the odd patterns in network traffic that can indicate an attack. Anomaly analysis can sometimes catch attacks exploiting "zero-day" vulnerabilities, so called because they rely on weaknesses that are completely unknown, giving defenders "zero days" to figure out how to head them off. "By employing only signature-based intrusion detection, NCPS is unable to detect intrusions for which it does not have a valid or active signature deployed. This limits the overall effectiveness of the program," the report reads. A security system that relies on signatures, in other words, is only as good as its list of signatures. Worse, the system was properly deployed at only five of the 23 nonmilitary government agencies for which it was intended, and only one of those agencies had deployed it to scan email, a common vector for attacks.
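To see why signature-only detection has this blind spot, consider a minimal sketch of the approach in Python. This is purely illustrative, not Einstein's actual code; the signature patterns and function names here are invented for the example. A detector like this flags only traffic containing a known pattern, so a zero-day payload with no matching signature passes through without any alert.

```python
# Illustrative sketch of signature-based intrusion detection.
# The signatures below are hypothetical examples, not real Einstein rules.
KNOWN_SIGNATURES = {
    "sig-001": b"\x90\x90\x90\x90",   # hypothetical NOP-sled byte pattern
    "sig-002": b"cmd.exe /c",         # hypothetical command-injection string
}

def scan(payload: bytes) -> list[str]:
    """Return the IDs of every known signature found in the payload."""
    return [sid for sid, pattern in KNOWN_SIGNATURES.items()
            if pattern in payload]

# Traffic matching a known signature is flagged...
print(scan(b"GET /?q=cmd.exe /c+dir HTTP/1.1"))  # ['sig-002']
# ...but a novel (zero-day) payload produces no alert at all.
print(scan(b"never-before-seen exploit bytes"))  # []
```

An anomaly-based system would instead model what normal traffic looks like and flag deviations from that baseline, which is what gives it a chance, though no guarantee, of catching attacks no one has seen before.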

The stinging report is the latest reminder of just how poorly government agencies have protected their computers and the sensitive data on them. Last year the federal Office of Personnel Management, the government's human resources branch, disclosed a data breach that exposed information on some 22 million people who had worked for the government. The stolen information dated back decades and included fingerprint data on nearly six million people. Private-sector researchers later traced the hack to a group based in China. One big reason agencies fare so badly: when putting security in place, they check off a list of vague requirements created by lawmakers and regulatory agencies, but tend not to consider the risk that those requirements aren't sufficient. None of this is exactly news in government circles. A study by the security firm Veracode last year found that after security flaws were discovered in the software they use, government agencies fixed them by applying patches only 27 percent of the time, versus 81 percent for private companies. Why? Because no specific law or regulation requires it.