There’s a funny catch-22 when it comes to privacy best practices. The very techniques that experts recommend to protect your privacy from government and commercial tracking could be at odds with the antiquated, vague Computer Fraud and Abuse Act (CFAA).

A number of researchers (including me) recently joined an amicus brief (filed by Stanford’s Center for Internet and Society in the “Weev” case), arguing how security and privacy researchers are put at risk by this law.

However, I’d also like to make the case here that the CFAA is bad privacy policy for *consumers*, too. It’s not just something that affects hackers and academics.

The crux of a CFAA violation hinges on whether or not an action allows a user to gain “access without authorization” or “exceed authorized access” to a computer. The scary part, therefore, is when these actions involve everyday behaviors like clearing cookies, changing browser reporting, using VPNs, and even protecting one’s mobile phone from being identified.

The Conveniences and Perils of Cookies
--------------------------------------

Users encounter as many as 351 different third-party trackers when visiting popular sites like the Huffington Post, according to a recent study. The companies responsible for much of this tracking have repeatedly refused to honor user preferences, and private tracking technology is increasingly sophisticated at circumventing blocking tools.

Clearing cookies limits the profiles advertisers can compile, essentially rendering us as a new user to web services. In fact, the FTC recommends that users clear cookies to protect their private information, and the Treasury Department advises the same – though in that case it’s to make sure their website is loading correctly for users.
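For illustration, here is roughly what “clearing cookies” amounts to under the hood, sketched with Python’s standard-library cookie jar (the tracker cookie below is a made-up example, not any real service’s cookie):

```python
from http.cookiejar import Cookie, CookieJar

def tracker_cookie(name, value, domain):
    """Build a cookie like one a third-party tracker might set (values hypothetical)."""
    return Cookie(
        version=0, name=name, value=value, port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=True,
        path="/", path_specified=True, secure=False, expires=None,
        discard=False, comment=None, comment_url=None, rest={},
    )

jar = CookieJar()
jar.set_cookie(tracker_cookie("uid", "abc123", ".tracker.example"))
print(len(jar))  # 1: the jar now carries a persistent identifier

jar.clear()      # the browser's "clear cookies" button does essentially this
print(len(jar))  # 0: to the tracker, the next request looks like a brand-new user
```

Everything the tracker knows about “us” hangs off that stored identifier; discarding it is what severs the profile.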

However, many websites rely on cookies to enforce paywalls. They do this so their freemium business models can work transparently: users don’t have to log in, or even be aware of the meter, until they hit the limit.

The *New York Times*, for example, imposes a ten-articles-a-month limit for non-subscribers, allowing users to browse ten articles for free but then requiring payment for subsequent use. But the method the *New York Times* and other publications use to identify users is unreliable and easy to circumvent, even inadvertently. Clearing one’s cookies periodically – or even using a browser’s private browsing mode – bypasses the flimsy paywalls and allows users to continue reading stories.
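A toy sketch of how such cookie-based metering works (the cookie name and limit here are hypothetical stand-ins, not the *Times*’s actual implementation):

```python
# Toy metered paywall: the article count lives in a client-side cookie,
# so discarding the cookie resets the meter.
FREE_LIMIT = 10

def serve_article(cookies):
    """Return ("ARTICLE", cookies) or ("PAYWALL", cookies) based on the meter cookie."""
    count = int(cookies.get("articles_read", "0"))
    if count >= FREE_LIMIT:
        return "PAYWALL", cookies
    cookies["articles_read"] = str(count + 1)
    return "ARTICLE", cookies

cookies = {}
for _ in range(10):
    status, cookies = serve_article(cookies)

print(serve_article(cookies)[0])  # PAYWALL: the 11th request is blocked
print(serve_article({})[0])       # ARTICLE: an empty cookie jar resets the meter
```

The last line is exactly what clearing cookies (or private browsing mode) does: the server sees no meter, so it starts counting from zero.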

#### About the author

[Ashkan Soltani](http://ashkansoltani.org/) is a researcher and consultant focused on privacy, security, and behavioral economics. He served as staff technologist in the FTC’s Division of Privacy and Identity Protection and has testified before the Senate Commerce and Judiciary Committees. Soltani was also the primary technical consultant on the *Wall Street Journal*’s “What They Know” investigative series on internet privacy.

Under an unsophisticated judge’s take, this act could be interpreted as exceeding “authorized access” (of 10 free articles a month) – and is therefore a potential, prosecutable violation of the CFAA.

Beware the User Agent
---------------------

Beyond cookies, browser fingerprinting is another technique used to uniquely identify users as they browse. It works by combining which browser a person is using (the “user agent”), their IP address, and a variety of configuration settings into a distinctive signature. It’s not unlike human fingerprinting.

Changing one’s user agent – altering the browser name that one’s browser reports to the website – therefore reduces the accuracy of the fingerprint and makes the tracking less reliable. However, altering our user agents is potentially problematic because some businesses personalize content and prices based on them.
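A minimal sketch of the idea, assuming a tracker that simply hashes the attributes it can observe (real fingerprinting libraries combine many more signals, but the principle is the same):

```python
import hashlib

def fingerprint(user_agent, ip, screen, timezone):
    """Toy fingerprint: a hash of the attributes a tracker can observe."""
    raw = "|".join([user_agent, ip, screen, timezone])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

safari = fingerprint("Mozilla/5.0 (Macintosh) Safari/605.1",
                     "203.0.113.7", "1440x900", "UTC-8")
spoofed = fingerprint("Mozilla/5.0 (Windows NT 10.0) Firefox/115.0",
                      "203.0.113.7", "1440x900", "UTC-8")

print(safari != spoofed)  # True: changing only the user agent yields a different fingerprint
```

Because the fingerprint is a function of all the inputs, varying even one of them breaks the link between the user’s past and future visits.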

For example, some websites offer a different class of service on Gogo Inflight Wi-Fi, or different hotel prices and packages for Mac customers (Safari) vs. PC customers (Internet Explorer) – remember when Orbitz got caught doing this?

Yet changing the user agent – which prevents the above discrimination and tracking – could conceivably exceed the authorized access intended for an individual’s computer device, thus violating the CFAA. Another privacy best practice bites the dust.

And VPNs Too
------------

Virtual Private Network (VPN) services are essentially private tunnels: they mask users’ true IP addresses by routing traffic through remote endpoints, sometimes in other countries. Many companies allow employees to reach internal systems from outside the office only over a VPN.

Using a VPN ensures the data people are sending and receiving is encrypted and safe from prying eyes – such as eavesdropping hackers at Wi-Fi cafes, enterprising ISPs performing deep packet inspection for marketing purposes, or even governments looking to surveil traffic. (Dissidents for example would need to mask their locations in order to circumvent government blocks or surveillance when accessing sensitive content or services.)

However, VPN services also allow users to circumvent the geographic restrictions that businesses put on their products. That’s why the third-largest Internet provider in New Zealand just added a service that masks computer location: its users can now access international content that would otherwise be unavailable, since some businesses only offer content in geographies they’re able to monetize. Video streaming services like Hulu and the BBC, for example, rely on users’ IP addresses to restrict access to video content.
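A toy illustration of IP-based geo-restriction, using made-up documentation address ranges (real services consult commercial geolocation databases rather than a hard-coded allowlist):

```python
import ipaddress

# Pretend this range is the service's licensed region
# (addresses are documentation-only examples, not real geolocation data).
LICENSED_REGION = ipaddress.ip_network("198.51.100.0/24")

def geo_allowed(client_ip):
    """Admit the client only if its apparent IP falls in the licensed region."""
    return ipaddress.ip_address(client_ip) in LICENSED_REGION

print(geo_allowed("198.51.100.42"))  # True: appears to be in the licensed region
print(geo_allowed("203.0.113.9"))    # False: blocked

# A VPN endpoint inside 198.51.100.0/24 would make the blocked user
# indistinguishable from the allowed one: the service only ever sees
# the endpoint's address, not the user's.
```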

Using a VPN for privacy reasons could therefore result in inadvertently circumventing the (mediocre) methods video services use to restrict content access. It’s yet another way that protecting oneself could mean violating the law.

When Mobile Tracks Your Every Move
----------------------------------

Nearly every device – not just computers but tablets and smartphones too – has a radio that communicates with nearby wireless hotspots. That radio carries a unique serial number, known as its MAC address, which identifies the device.

Recently, a variety of services have sprouted up that surreptitiously monitor this unique serial number to track us as we move about our day – for example, to follow shoppers through malls to better profile their shopping behaviors. While some of these services provide an opt-out, consumers would have to first know about the service and *then* find the opt-out page: difficult for something that occurs without our knowledge. (And not surprisingly, when stores do tell consumers they’re tracking them, the information is not well received.)

A potentially more effective way to thwart this form of tracking is to change the MAC address serial number on a regular basis using apps such as MacChanger. Doing so is akin to clearing cookies in one’s browser: it severs the link to past history (i.e., previous shopping activities).
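A sketch of what such a tool does under the hood, assuming the common approach of generating a random locally administered address (MacChanger’s actual logic may differ):

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address,
    the kind of replacement tools like MacChanger assign."""
    # First byte: bit 0 = 0 (unicast), bit 1 = 1 (locally administered),
    # so the address can't collide with a manufacturer-assigned one.
    first = (random.randint(0, 255) & 0b11111100) | 0b00000010
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())  # e.g. '06:1f:a3:9c:4e:d2'; a fresh identity on every call
```

Each call produces a new, valid-looking hardware identity, so any tracker keyed on the MAC address sees an unfamiliar device.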

However, many public wireless hotspots – like those in airports or coffee shops – rely on the MAC serial number to limit access to certain users. Changing our MAC addresses to protect privacy could therefore land us in violation of the CFAA because it allows us to “exceed authorized access” to a network protected by a system that relies on the persistence of these identifiers to exclude us.
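A toy version of that hotspot scheme, keying a one-time free session on the MAC address the device reports (the addresses here are invented):

```python
# One free session per device, keyed solely on the MAC address it reports.
# The scheme works only as long as the identifier persists.
seen_macs = set()

def grant_free_session(mac):
    if mac in seen_macs:
        return False      # this device already used its free session
    seen_macs.add(mac)
    return True

print(grant_free_session("a4:5e:60:e2:11:01"))  # True: first visit
print(grant_free_session("a4:5e:60:e2:11:01"))  # False: limit enforced
print(grant_free_session("06:1f:a3:9c:4e:d2"))  # True: a changed MAC looks brand new
```

The third call is the rub: the same privacy-protecting change that foils mall trackers also defeats the hotspot’s access control, which is exactly the behavior the CFAA’s “exceeds authorized access” language could sweep in.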

It’s a system that won’t let us out.

***

We now know how revealing – and how voluminous – the information we generate is as we browse the internet, use our phones, or just go about our daily lives. The recent revelations about the extent of government surveillance have also made clear that privacy best practices are no longer an academic or tech-savvy fringe enterprise.

While cases like Weev’s draw attention to the limitations of the CFAA, and our amicus brief argues it could curb necessary research, it’s also important to recognize that a law meant to protect us actually limits the legitimate tools consumers need to protect themselves online.

As it is, consumers have only limited recourse in minimizing their digital footprints. Yet the CFAA’s vague language of “authorized access” creates an environment where using technical tools to keep our behavior and content private could violate the law if we access the wrong site. Most users probably don’t have any malicious intent when using any of the above technical tools, and the methods to restrict access to those sites are often very vulnerable to basic workarounds.

It seems rather ironic to hold users accountable for the fact that the techniques used to limit or exclude their access are not very sophisticated – and are, arguably, dumb.

Editor: Sonal Chokshi @smc90