If your code gets hacked, are you the one on the hook? In the early decades of the software industry, the answer was usually "no." Software licenses routinely disclaimed liability, and until recently, security flaws were considered just another fact of life. When problems were discovered, companies were expected to fix them quickly, but they were rarely held responsible for the resulting damage.

That's changing rapidly. Recently, Sony faced a class action lawsuit for losing the private information of millions of users. And this week, it was reported that Dropbox is already being sued for a recent security breach of its own.

It's too early to know if these particular lawsuits will get anywhere, but they're part of a growing trend. As online services become an ever more important part of the American economy, the companies that create them increasingly find that security problems are hitting them where it really hurts: the bottom line.

Computer security has also been an area of increasing activity for the Federal Trade Commission. In mid-June, FTC commissioner Edith Ramirez testified to Congress about her agency's efforts to get companies to beef up their online security. In addition to enforcing specific rules for the financial industry, the FTC has asserted authority over any company that makes "false or misleading data security claims" or causes harm to consumers by failing to take "reasonable security measures." Ramirez described two recent settlements with companies whose security vulnerabilities had allowed hackers to obtain sensitive customer data. Among other remedies, those firms have agreed to submit to independent security audits for the next 20 years.

The world in which software companies could safely treat security as an afterthought is gone—but it's not yet clear what will replace it. Class action lawsuits and FTC enforcement actions are two possible mechanisms for getting companies to take security seriously. But there are other candidates, including prospective security audits, education, and data retention rules. The right rules will encourage companies to take security seriously, but too much regulation could hamper the software development process.

A culture of security

Ars asked Alex Halderman, a computer science professor at the University of Michigan, to help us evaluate these options. He argued that consumer choice by itself is unlikely to produce secure software. Most consumers aren't equipped to tell whether a company's security claims are "snake oil or actually have some meat behind them." Security problems therefore tend not to become evident until it's too late.

But he argued the most obvious regulatory approach—direct government regulation of software security practices—was also unlikely to work. A federal agency like the FTC has neither the expertise nor the manpower to thoroughly audit the software of thousands of private companies. Moreover, "we don't have really widely regarded, well-established best practices," Halderman said. "Especially from the outside, it's difficult to look at a problem and determine whether it was truly negligent or just the kind of natural errors that happen in every software project."

And when an agency found flaws, he said, it would have trouble figuring out how urgent they were. Private companies might be forced to spend a lot of time fixing trivial flaws while more serious problems go overlooked.

Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.

This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in the future.

Other tools

Still, Halderman warned that too much litigation could cause companies to become excessively security-conscious. Software developers always face a trade-off between security and other priorities like cost and time to market. Forcing companies to devote too much effort to security can be as harmful as devoting too little. So policymakers shouldn't focus exclusively on liability, he said.

Another strategy is to require transparency. Some states already have laws mandating the disclosure of data breaches. Halderman suggested that these rules could be extended to cover other kinds of security problems, which would give consumers more information when deciding which software or services to choose.

Halderman pointed to the value of education, which the FTC has also emphasized. Obviously, companies have a responsibility to train their own programmers about security. But he argued that universities also have an important role to play, and that security training should be part of every college's computer science curriculum.

Finally, Halderman said that companies should minimize the amount of information they hold about their users. This won't prevent security problems, but it will reduce the harm from security breaches that do occur. He argued that companies should automatically delete data that's no longer needed, and that consumers should be given the option to examine and delete information companies hold about them.
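To make the data-minimization idea concrete, here is a minimal sketch of the kind of scheduled cleanup job a service might run to delete records past a retention window. All of the names here (the field names, the 90-day window) are invented for illustration; a real system would delete rows from a database on a schedule rather than filter an in-memory list.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of "automatically delete data that's no
# longer needed": keep only records younger than a retention window.
RETENTION_DAYS = 90

def purge_expired(records, now=None):
    """Return only the records still inside the retention window.

    `records` is a list of dicts, each with a timezone-aware
    `created_at` datetime. Everything older than RETENTION_DAYS
    is dropped.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["created_at"] >= cutoff]

# Example: one fresh record and one stale record.
now = datetime.now(timezone.utc)
records = [
    {"user": "alice", "created_at": now - timedelta(days=10)},
    {"user": "bob", "created_at": now - timedelta(days=400)},
]
kept = purge_expired(records, now=now)
print([r["user"] for r in kept])  # the stale record is dropped
```

The point of a job like this is exactly the one Halderman makes: data a company no longer holds cannot be stolen in a breach.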

The recent LulzSec rampages make it clear just how much work is needed to secure websites and corporate networks from attack. Corporate America may or may not be afraid of hooligans like LulzSec, but it definitely understands the dangers of a big class action lawsuit. Liability leverages one of the most powerful forces on the planet—corporate self-interest—in the cause of computer security.