Right now, people are willing to share data for the free stuff they get on the web. Partly, that's because the stuff on the web is awesome. And partly, that's because people don't know what's happening on the web. When they visit a website, they don't really understand that a few dozen companies may collect data on that visit.

The traditional model of how this works says that your information is something like a currency, and that when you visit a website that collects data on you for one reason or another, you enter into a contract with that site. As long as the site gives you "notice" that data collection occurs -- usually via a privacy policy located through a link at the bottom of the page -- and you give "consent" by continuing to use the site, then no harm has been done. No matter how much data a site collects, if all they do is use it to show you advertising they hope is more relevant to you, then they've done nothing wrong.

It's a free market kind of thing. You are a consumer of Internet pages and you are free to go from one place to another, picking and choosing among the purveyors of information. Never mind that if you actually read all the privacy policies you encounter in a year, it would take 76 work days. And that calculation doesn't even account for all the third parties that drain data from your visits to those sites.

Even more to the point: there is no obvious way to discriminate between two websites on the basis of their data collection policies. While tools have emerged to tell you how many data trackers are deployed on a site at a given moment, the dynamic nature of Internet advertising means that it's nearly impossible to know the full story over time. As I explained in a previous post, advertising space can be sold and resold many times. At each juncture, the new buyer has to have some information about the visit. Ads can also be sold by geography or probable demographic indicators, so many, many companies may end up involved with some of the data from a single site.

I asked Evidon, the makers of a track-the-trackers tool called Ghostery, to see how many data trackers ran during the past month on four news websites and my home here, The Atlantic. The numbers were astonishing. The Drudge Report and Huffington Post both ran over 200 trackers. The New York Times ran 146 and The Wall Street Journal 99. We deployed 48. Of course, these are just the numbers: data tracking firms are invasive in different ways, so it's possible that our 48 tracking tools collect just as much data as Drudge's 205. Even if the sheer numbers seem to indicate that something different in degree is happening at Drudge and Huffington Post than at our site, I couldn't tell you for sure that was the case.

How can anyone make a reasonable determination of how their information might be used when there are more than 50 or 100 or 200 tools in play on a single website in a single month? "I think the biggest challenge we have right now is figuring out a way to educate the average user in a way that's reasonable," Evidon's Andy Kahl told me. Some people talk about something like a nutrition label for data policies. Others, like Stanford's Ryan Calo, talk about "visceral notice."