New Book Reveals Significant Cybersecurity Information Sharing Between Tech Companies And NSA; So Why Do We Need A New Law?

from the big-questions dept

On the day that Google’s lawyer wrote the blog post, the NSA’s general counsel began drafting a “cooperative research and development agreement,” a legal pact that was originally devised under a 1980 law to speed up the commercial development of new technologies that are of mutual interest to companies and the government. The agreement’s purpose is to build something — a device or a technique, for instance. The participating company isn’t paid, but it can rely on the government to front the research and development costs, and it can use government personnel and facilities for the research. Each side gets to keep the products of the collaboration private until they choose to disclose them. In the end, the company has the exclusive patent rights to build whatever was designed, and the government can use any information that was generated during the collaboration.



It’s not clear what the NSA and Google built after the China hack. But a spokeswoman at the agency gave hints at the time the agreement was written. “As a general matter, as part of its information-assurance mission, NSA works with a broad range of commercial partners and research associates to ensure the availability of secure tailored solutions for Department of Defense and national security systems customers,” she said. It was the phrase “tailored solutions” that was so intriguing. That implied something custom built for the agency, so that it could perform its intelligence-gathering mission. According to officials who were privy to the details of Google’s arrangements with the NSA, the company agreed to provide information about traffic on its networks in exchange for intelligence from the NSA about what it knew of foreign hackers. It was a quid pro quo, information for information.

...it lets the NSA evaluate Google hardware and software for vulnerabilities that hackers might exploit. Considering that the NSA is the single biggest collector of zero day vulnerabilities, that information would help make Google more secure than others that don’t get access to such prized secrets. The agreement also lets the agency analyze intrusions that have already occurred, so it can help trace them back to their source.

The NSA helps the companies find weaknesses in their products. But it also pays the companies not to fix some of them. Those weak spots give the agency an entry point for spying or attacking foreign governments that install the products in their intelligence agencies, their militaries, and their critical infrastructure. Microsoft, for instance, shares zero day vulnerabilities in its products with the NSA before releasing a public alert or a software patch, according to the company and U.S. officials. Cisco, one of the world’s top network equipment makers, leaves backdoors in its routers so they can be monitored by U.S. agencies, according to a cyber security professional who trains NSA employees in defensive techniques. And McAfee, the Internet security company, provides the NSA, the CIA, and the FBI with network traffic flows, analysis of malware, and information about hacking trends.



Companies that promise to disclose holes in their products only to the spy agencies are paid for their silence, say experts and officials who are familiar with the arrangements. To an extent, these openings for government surveillance are required by law. Telecommunications companies in particular must build their equipment in such a way that it can be tapped by a law enforcement agency presenting a court order, like for a wiretap. But when the NSA is gathering intelligence abroad, it is not bound by the same laws. Indeed, the surveillance it conducts via backdoors and secret flaws in hardware and software would be illegal in most of the countries where it occurs.

Starting in 2008, the agency began offering executives temporary security clearances, some good for only one day, so they could sit in on classified threat briefings.



“They indoctrinate someone for a day, and show them lots of juicy intelligence about threats facing businesses in the United States,” says a telecommunications company executive who has attended several of the briefings, which are held about three times a year. The CEOs are required to sign an agreement pledging not to disclose anything they learn in the briefings. “They tell them, in so many words, if you violate this agreement, you will be tried, convicted, and spend the rest of your life in prison,” says the executive.



[....]



But the NSA doesn’t have to threaten the executives to get their attention. The agency’s revelations about stolen data and hostile intrusions are frightening in their own right, and deliberately so. “We scare the bejeezus out of them,” a government official told National Public Radio in 2012. Some of those executives have stepped out of their threat briefings feeling like the defense contractor CEOs who, back in the summer of 2007, left the Pentagon with “white hair.”

Unsure how to protect themselves, some CEOs will call private security companies such as Mandiant. “I personally know of one CEO for whom [a private NSA threat briefing] was a life-changing experience,” Richard Bejtlich, Mandiant’s chief security officer, told NPR. “General Alexander sat him down and told him what was going on. This particular CEO, in my opinion, should have known about [threats to his company] but did not, and now it has colored everything about the way he thinks about this problem.”



The NSA and private security companies have a symbiotic relationship. The government scares the CEOs and they run for help to experts such as Mandiant. Those companies, in turn, share what they learn during their investigations with the government, as Mandiant did after the Google breach in 2010. The NSA has also used the classified threat briefings to spur companies to strengthen their defenses.



In one 2010 session, agency officials said they’d discovered a flaw in personal computer firmware — the onboard memory and codes that tell the machine how to work — that could allow a hacker to turn the computer “into a brick,” rendering it useless. The CEOs of computer manufacturers who attended the meeting, and who were previously unaware of the design flaw, ordered it fixed.

To obtain the information, a company must meet the government’s definition of a critical infrastructure: “assets, systems, and networks, whether physical or virtual, so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof.” That may seem like a narrow definition, but the categories of critical infrastructure are numerous and vast, encompassing thousands of businesses. Officially, there are sixteen sectors: chemical; commercial facilities, to include shopping centers, sports venues, casinos, and theme parks; communications; critical manufacturing; dams; the defense industrial base; emergency services, such as first responders and search and rescue; energy; financial services; food and agriculture; government facilities; health care and public health; information technology; nuclear reactors, materials, and waste; transportation systems; and water and wastewater systems.



It’s inconceivable that every company on such a list could be considered “so vital to the United States” that its damage or loss would harm national security and public safety. And yet, in the years since the 9/11 attacks, the government has cast such a wide protective net that practically any company could claim to be a critical infrastructure.


Salon has published an excerpt from Shane Harris' new book (which looks excellent), @War: The Rise of the Military-Internet Complex. The specific excerpt is called Google's secret NSA alliance: The terrifying deals between Silicon Valley and the security state, and it's an absolute must read. Frankly, Salon's title overstates the story. The article reveals more details about a ton of existing information sharing that goes on between the NSA and various tech companies to try to prevent malicious attacks from foreign threats (with the vast majority of them coming from China). The article focuses on some of the details behind Google's public admission that hackers in China had broken into Google's systems (as well as those of a number of other companies). Harris' story reveals that Google's own tech team had effectively traced the hack back to some servers in Taiwan and had hacked back into those servers themselves, discovering more information about what the Chinese hackers were up to (and that they'd hacked many other companies). However, it also notes that this resulted in Google agreeing to work with the NSA on preventing such attacks in the future.

There's much more in there, including that this isn't a program, like PRISM, that gives the NSA access to emails or other such information, but rather one focused on helping detect potential holes and security risks within Google's hardware and software. As the article notes, this is a pretty big concern -- because of what else the NSA might eventually do with this information. It raises serious questions about the tradeoffs here. Yes, it's good if the NSA can better protect online services from foreign attacks, but many people certainly consider the NSA a big risk as well.

As the article also makes clear, the NSA likes to hoard certain security holes for its own use -- and these kinds of information sharing arrangements are a pretty big concern on that front. The excerpt notes, however, that the NSA has gotten really good at scaring the living daylights out of tech execs with special classified briefings, driving them into relationships with the NSA, separate from those kinds of paid arrangements. This, in turn, leads them to team up with various private security companies, producing a rather "symbiotic" relationship.

That's an example where this kind of information sharing has been helpful in protecting the security of the public. And that's a good thing. But there are concerns about the costs on the other end, and about how trustworthy the NSA really is in its side of these arrangements.

Reading this excerpt, though, I kept going back to a key point in the big debates over the various cybersecurity bills that Congress has put forth in the past couple of years, mainly CISPA and CISA. In both of those bills, the key point that supporters kept making was that such bills were needed to facilitate "voluntary information sharing" between tech companies involved in "critical infrastructure" and the government (including the NSA -- though some of the bills put Homeland Security in place as a filter rather than having the information go directly to the NSA). But Harris' book seems to confirm exactly what many of us have been arguing for years: there doesn't seem to be anything stopping companies from doing this sort of "voluntary" information sharing today, so why do they suddenly need new laws? The answer, of course, is liability. The new laws don't really knock down any regulatory barriers to sharing information: they just make sure that the companies can't be sued over those arrangements.

Right now, it's not clear that companies would really be legally liable for these info sharing programs, but the programs can lead to lawsuits (and it wouldn't surprise me to see some class action suits filed using Harris' book as evidence). The point of the cybersecurity bills is to give companies blanket immunity, which would then encourage them to do more of this kind of sharing, with the NSA providing "incentives" by scaring companies as described above. As for the promise from supporters of these bills that they're only focused on "critical infrastructure" and not the rest of the web? Harris tackles that issue as well.

There's a lot more in the excerpt, and I assume a lot more in the book itself, which seems worth reading. It delves deeply into these relationships and how the NSA gets access to lots of information from telcos and tech companies. Again, actually protecting US infrastructure seems like an important goal, but from all of this, it's not clear how clearly the tradeoffs are recognized. More specifically, it seems quite troubling that this is being done by the NSA. It is increasingly clear that the dual functions of the NSA absolutely must be split. The "cyber" protections side and the surveillance side need to be separated. Having the online protection side is important in protecting infrastructure, but tying it to the same organization looking for holes to spy on others just makes us all less safe. Furthermore, it makes it abundantly clear that no new cybersecurity laws are needed, since these companies are already quite free to share information with the government for the sake of cybersecurity.

Filed Under: cisa, cispa, cybersecurity, information sharing, nsa, shane harris, surveillance

Companies: google