A computer security researcher who has inadvertently violated the law in the course of her investigation faces a dilemma when deciding whether to notify a company about a problem she discovered in one of the company’s products. By reporting the security flaw, the researcher reveals that she may have engaged in unlawful activity, which might invite a lawsuit or criminal investigation. On the other hand, withholding the information means a potentially serious security flaw may go unremedied.

There are no easy answers for the ethical hacker who has wandered off the straight and narrow into the legal thicket of computer offense laws. Among a set of undesirable choices, the ethical hacker may choose to reconstruct her research using software, devices and networks to which she has authorized access, and report based on this whitewashed reenactment of the discovery. She may also choose to report the flaw in general terms that identify the problem, without revealing the compromised research path she used to discover it. Neither option perfectly protects the researcher while ensuring the problem will be fixed. One or the other, however, may be the best of the bad options.

Despite the value computer security professionals provide by testing software and networks for exploitable vulnerabilities, research activities can violate a number of complicated or obscure regulations and statutes. Those laws include:

Computer Fraud and Abuse Act

Anti-Circumvention Provisions of the DMCA

Copyright law

Other state and international laws

The Computer Fraud and Abuse Act prohibits unauthorized access to computers. Researchers who perform testing on systems they do not own can run afoul of this law. Moreover, the law has been misused to prosecute use of computer services in violation of a terms of service notice and even the act of vulnerability reporting itself.

The anti-circumvention provisions of the Digital Millennium Copyright Act create a potential legal obstacle for a researcher studying technological measures that control how copyrighted software or other materials can be accessed or used, or studying software protected by such a measure. Potential litigants have invoked the anti-circumvention provisions not just where traditional “digital rights management” (DRM) is concerned, but also in cases involving reverse engineering an undocumented protocol.

U.S. copyright law regulates copies made during reverse engineering. States and other countries have their own computer crime laws, and researchers can hardly be expected to know all of them. For example, the United States prosecuted Dmitry Sklyarov and the Russian company Elcomsoft under the DMCA for creating a reader for Adobe eBooks, even though the product did not violate any Russian law.

Because the regulatory regime is complicated and non-intuitive, security researchers may have more reason to worry about legal challenges than other scientists. A researcher may unintentionally violate the law through ignorance or misplaced enthusiasm, or an offended party may stretch or misuse the law to challenge research that casts its products or services in a negative light.

This is why we recommend that security researchers consult with an attorney before doing potentially risky research. A good attorney can help you avoid common legal traps, and if you identify the legal risks up front, you may be able to adjust your research plan to mitigate or avoid problems entirely.

The researcher is in a quandary when she has potentially broken the law, but never intended to steal information or invade privacy and wants to see the problem fixed. Reporting the information raises a red flag that could result in an investigation and civil claims or even criminal charges. Keeping quiet means that the flaw will go unremedied and potentially could be exploited by someone who does have criminal intent. What is the grey hat hacker to do?

Some companies have made it clear that they generally will not pursue legal action if a non-malicious person brings vulnerabilities to their attention, regardless of the way in which the flaw was found. Companies that frequently deal with vulnerabilities are often the most open and the least offended by researcher missteps. They have seen it all before, they want to improve their products, and they don’t want the bad press that comes with suing well-meaning hackers. Companies that make their money in ways other than software distribution are often new to the vulnerability reporting game. These companies are more likely to panic, overreact and bring out the lawyers when confronted with information about flaws in their products. For the researcher, this often means guessing about the reaction of the party to which the report should go.

To moderate the legal risk, researchers may consider using some intermediary as a go-between. In past instances, a researcher has given the vulnerability information to a lawyer or journalist and asked that person to pass the information along to the vendor. Other researchers have considered applying to give a talk at a security conference and asking the presentation selection committee to approach the vendor, or even selling the vulnerability information to a brokering company that will disclose to the vendor, and then to its customers, and finally to the public. None of these possibilities is a perfect solution, however. Each approach presents the risk that an angry vendor will seek to discover the researcher’s identity from the intermediary. Neither conference organizers nor vulnerability brokers have any legal privilege to refuse to disclose the researcher’s identity if a court deems it relevant to a legal proceeding. Journalists have no such privilege under United States Supreme Court precedent, though they may under some lower court rulings or state law. Lawyers have the strongest right to refuse to provide identifying information under the attorney-client privilege, but it is not clear that client identity is protected information for all purposes.

The researcher might consider resorting to anonymity to hide her identity while reporting relevant details to the affected parties. Anonymity may exacerbate some of the common problems vulnerability reporters face, such as difficulty in being taken seriously, but the particular problem for the grey hat hacker is to give enough information about the research without pointing to an evidence trail that will reveal the offense and the researcher’s true identity. This is why the researcher may want to recreate the discovery of the flaw using only software, devices and networks she owns and/or has permission to use, and report that “discovery.” Alternatively, the researcher may have to leave out details that would in effect serve as a trail of breadcrumbs leading to her door, and hope that the remaining information is enough to notify, inform and assist the affected parties in fixing the problem.

Whatever course the researcher takes, she is exposing herself in the interest of bettering security for the public. A more comprehensive solution would be to draw our computer offense laws more narrowly and set them forth more clearly. The goal is to leave breathing room for legitimate security research and to give the researchers who help protect our digital property and privacy clear guidelines for their scientific and innovative activities. It is far better to allow security research to flourish in an atmosphere of light regulation than to try to punish criminal attacks after they happen with draconian and confusing laws. In the meantime, however, security improvements will sometimes depend on the willingness of researchers to accept the risk of being sued.