FAQ

What is this research about?

Much past work has focused on the privacy implications of online advertising and on the tracking of user behaviors and properties that enables targeted advertising. This past work has, for example, studied how advertising networks track and target users, as well as what information they can learn about users in the process (e.g., their browsing history).

Our work focuses on a different actor in the broader tracking ecosystem: individuals who can -- with a modest budget -- purchase ads from advertising networks. We ask: what types of information can these individuals learn about users, and at what cost?

Why did you do this work?

We conducted this research to provide a new perspective to the advertising privacy debate. Though the fact that advertising networks are able to collect large amounts of data about users is a potential privacy concern (for example, if that data is breached or misused), such concerns are theoretical or abstract for most Internet users. By contrast, our work shows that arbitrary individuals -- who are not driven by the business aims or reputational concerns of a large advertising network -- can also access this personal data, and even narrowly target a particular person’s information. We believe that our findings provide an important additional perspective to the conversation around the privacy concerns of targeted advertising, and we hope to enable a broad public discussion among advertisers, consumers, and policymakers about how to prevent such risks.

What are the main findings?

Please see our full paper for the details of our results. A brief summary of some high-level findings is below.

A variety of advertising services are accessible to individuals with a modest budget ($1,000 or less) and a website.

Advertising can be used by the individuals buying ads to track a target’s location in near real time.

Advertising can be used by individuals buying ads to determine which ad-supported apps a target uses, and when.

There are nuances in these results, and we encourage interested readers to read our full paper. We experimented with one advertising network, and then surveyed many more. Because we believe that our findings suggest a privacy risk that is industry-wide, we do not name the specific advertising network that we experimented with.

Why do you call your work “ADINT”?

Our term “ADINT” is inspired by the U.S. government’s naming scheme for different categories of intelligence gathering capabilities, including SIGINT (signals intelligence, like radio interception) and HUMINT (human intelligence, like espionage). We dub intelligence gathered through the advertising ecosystem, as a purchaser of ads, ADINT (advertising-based intelligence).

How does targeting people with ads work?

Ads can be targeted in a wide variety of ways, as described in the survey in our paper, including attributes of a person, like age or gender, and attributes of the device, like operating system, IP address, location, or Mobile Advertising ID (MAID). Some of these, like the MAID, are unique; others, like location or IP address, are only semi-unique and might be shared by many individuals in some situations.
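To make the unique vs. semi-unique distinction concrete, here is a minimal sketch of a targeting specification. The field names are illustrative assumptions, not any real ad network’s API; each network defines its own schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the targeting dimensions discussed above.
# Field names are illustrative; real ad networks each use their own schema.
@dataclass
class TargetingSpec:
    age_range: Optional[tuple] = None        # person-level, coarse
    gender: Optional[str] = None             # person-level, coarse
    operating_system: Optional[str] = None   # device-level, semi-unique
    ip_address: Optional[str] = None         # semi-unique (may be shared)
    location: Optional[tuple] = None         # (lat, lon), semi-unique
    maid: Optional[str] = None               # device-level, unique

    def is_individually_identifying(self) -> bool:
        # A MAID pins the ad to exactly one device; the other
        # dimensions may be shared by many people.
        return self.maid is not None

spec = TargetingSpec(maid="6ba7b810-9dad-11d1-80b4-00c04fd430c8")
print(spec.is_individually_identifying())  # True
```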

How can someone who buys ads learn what applications someone is using?

The advertising network that we experimented with reports the context in which an ad is served -- that is, which app the ad was served in. So whenever our ad is served to a target, we know which app they were using. All the other advertising networks that we surveyed share this property.
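A minimal sketch of how an ad purchaser could assemble such reports into an app-usage log. The report format here is an assumption for illustration; the only property relied on is the one described above -- that each serve report names the app.

```python
# Hypothetical serve reports; the (timestamp, app) fields are assumed
# for illustration, not any real network's report format.
serve_reports = [
    {"timestamp": "2017-06-01T09:02:00", "app": "ExampleChatApp"},
    {"timestamp": "2017-06-01T09:06:00", "app": "ExampleChatApp"},
    {"timestamp": "2017-06-01T12:30:00", "app": "ExampleGameApp"},
]

def app_usage_log(reports):
    """Each served ad reveals which app the target was using, and when."""
    return [(r["timestamp"], r["app"]) for r in reports]

for ts, app in app_usage_log(serve_reports):
    print(ts, app)
```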

How does location tracking work? How can ads be used to track someone’s location?

The first step in location tracking using ads is to obtain the target’s MAID by sniffing their network traffic (see below), which allows us to specify that ads be served only to the target device. We then create a series of ads, each targeted at that MAID, but each also targeted at a different GPS location. This creates a geographical, grid-like pattern of ads. We can then observe which of these ads get served, which indicates where the target actually was.

This diagram illustrates the concept: the blue dots are individual ads targeted at different locations, the purple path is the actual path of the target through space, and the red dots are ads that are served.

There are limitations to this kind of attack: the target still has to use apps that show ads; the accuracy of targeting is limited to about 8 meters; and the target has to remain in a location for about 5 minutes before an ad will be served for that location.
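The grid scheme described above can be sketched in a few lines. The cell spacing and the serve-report format are assumptions for illustration; the idea is simply one ad per grid cell, all targeted at the same MAID, with the served cells tracing the target’s path.

```python
# Sketch of the geographic grid of ads described above.
def make_grid(center_lat, center_lon, rows, cols, spacing_deg):
    """Return one (lat, lon) targeting point per grid cell,
    centered on (center_lat, center_lon)."""
    ads = {}
    for r in range(rows):
        for c in range(cols):
            lat = center_lat + (r - rows // 2) * spacing_deg
            lon = center_lon + (c - cols // 2) * spacing_deg
            ads[(r, c)] = (lat, lon)
    return ads

def infer_path(grid, served_cells):
    """The cells whose ads were served trace the target's path."""
    return [grid[cell] for cell in served_cells]

# 3x3 grid around a point in Seattle; served_cells is hypothetical data.
grid = make_grid(47.6062, -122.3321, rows=3, cols=3, spacing_deg=0.0001)
path = infer_path(grid, [(0, 0), (0, 1), (1, 1)])
print(path)
```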

Does the target need to click on the ad?

No, not in order to be tracked by our techniques. When our ad is served (i.e., displayed) to the target, the ad network we used reports back to us about this fact, regardless of whether the target clicked on the ad.

What is a MAID and how is it obtained?

The Mobile Advertising ID (MAID) is a pseudorandom identifier that uniquely identifies a particular device for advertising, similar to the way tracking cookies are used in browsers. Most ad networks that we surveyed allow ads to be targeted at specific MAIDs. Many of our attacks use this feature to learn information by targeting ads at a specific target’s device.

One can learn a device’s MAID in a number of ways:

You can read it from the device’s network traffic -- for example, if the device is on an unencrypted WiFi network, if it is on a WiFi network you control, or if you can intercept its cellular data. This is the method we used in our experiments.

If the target ever clicks on your ad, the device’s MAID is automatically transmitted with the web request, much like a tracking cookie.

Some apps allow JavaScript in the ad, which can cause the device to transmit the MAID in a web request without user interaction.
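To illustrate the first method: MAIDs are UUID-shaped, so a passive observer of plaintext traffic can simply look for UUID-like strings in ad-request payloads. The sample request below is fabricated for illustration; real ad requests vary by network and app.

```python
import re

# MAIDs follow the standard UUID format: 8-4-4-4-12 hex digits.
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def find_maid_candidates(payload: str):
    """Return UUID-shaped strings observed in a plaintext payload."""
    return UUID_RE.findall(payload)

# Fabricated example of an unencrypted ad request carrying a MAID.
payload = "GET /ad?idfa=6ba7b810-9dad-11d1-80b4-00c04fd430c8&os=ios HTTP/1.1"
print(find_maid_candidates(payload))
```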

What can someone learn without someone’s MAID?

This varies a lot depending on the situation. For example, if one knew that one’s neighbor lived alone, one could target the neighbor’s IP address or house’s location with ads, using that as a stand-in for the MAID. Doing this, one could learn, for example, whether the neighbor uses specific dating or religious apps.

One could also do the inverse: create a geographic grid of ads but, instead of targeting a specific individual via their MAID, target a specific app. Rather than showing the location of a particular individual (as when we used the MAID, above), this would show a grid of all the locations of that app’s users in the area. Example apps served by the advertising network that we experimented with include Grindr, MeetMe, Talkatone, TextPlus, and Words with Friends. We did not actually test an attack of this type, for ethical reasons, but our research indicates it is feasible.
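A sketch of this inverse attack, under the same illustrative assumptions as before: grid ads are targeted at an app rather than a MAID, and counting serves per grid cell yields a coarse map of that app’s users. The report format and app names are fabricated for illustration.

```python
from collections import defaultdict

# Hypothetical serve reports for app-targeted grid ads; each report
# says which grid cell's ad was served inside which app.
serve_reports = [
    {"app": "ExampleDatingApp", "cell": (2, 3)},
    {"app": "ExampleDatingApp", "cell": (2, 3)},
    {"app": "ExampleDatingApp", "cell": (5, 1)},
]

def user_location_counts(reports, app):
    """Count serves per grid cell: a coarse map of the app's users."""
    counts = defaultdict(int)
    for r in reports:
        if r["app"] == app:
            counts[r["cell"]] += 1
    return dict(counts)

print(user_location_counts(serve_reports, "ExampleDatingApp"))
```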

Why has the advertising ecosystem created tools that allow these attacks?

There is a fundamental tension at work in the online advertising ecosystem: the precision targeting features we used for these attacks have been developed for legitimate business purposes. Advertisers are incentivized to provide more highly targeted ads, but each increase in targeting precision inherently increases ADINT capabilities.

What can users do? What can advertisers do?

Advertising networks: We recommend that advertising networks do more vetting of the parties that wish to purchase ads. We further recommend that all advertising networks ensure that targeted ads are distributed to a minimum number of people and locations, thereby making it harder to track individuals or individual locations. Advertising networks may also wish to consider preventing the delivery of ads from a single advertiser to the same person multiple times a day, and to provide less information to the purchasers of ads -- e.g., not revealing which apps the ads were displayed in, or not reporting the actual time at which the ads were displayed.
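Two of these mitigations -- a minimum audience size and a per-person frequency cap -- can be expressed as simple checks a network might run before serving an ad. The thresholds and record format below are assumptions for illustration, not values from the paper.

```python
# Illustrative thresholds; real networks would tune these.
MIN_AUDIENCE = 1000      # a campaign must reach at least this many people
MAX_SERVES_PER_DAY = 1   # per advertiser, per person, per day

def may_serve(campaign_audience_size: int, serves_today_to_person: int) -> bool:
    """Refuse narrowly targeted or repeatedly re-served ads."""
    if campaign_audience_size < MIN_AUDIENCE:
        return False  # audience too small: could single out one person
    if serves_today_to_person >= MAX_SERVES_PER_DAY:
        return False  # frequency cap: limits repeated location pings
    return True

print(may_serve(5, 0))      # False: audience too small to hide in
print(may_serve(5000, 0))   # True
```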

Users: Users concerned about the privacy risks we have identified in the course of our research should consider resetting their MAID. This is how to do so on an iPhone. This is how to do so on an Android phone. Users may also wish to turn off apps’ access to location on their phone. This is how to do so on an iPhone. This is how to do so on an Android phone.

Can you give examples of situations in which someone might use ADINT to learn private information about others?

The primary goal of our work was to understand this broad class of privacy risks. By studying these risks, the community can have an informed conversation about how to mitigate these risks in the future. We thus wish to focus most of this FAQ on the technical aspects of our discoveries. However, in Section 6 of the paper, we do consider example scenarios in which individuals or organizations might seek to leverage the capabilities that we surfaced in our research.

Is ADINT related to recent news about foreign advertising in the 2016 U.S. election?

Probably not. The current reports indicate that the objective of the ads in question was simply to show messages about political topics to individuals. There is no indication that the ads were used to collect additional information about those targeted.

Using ads to collect additional information is what ADINT is about; very specific targeting of ads to parties you want to influence to vote a certain way or purchase certain products or services is simply a standard modern advertising practice.

What advertising network do your results pertain to?

Our results -- both our experiments with one advertising network and our survey of many others -- point to an industry-wide issue. We therefore choose not to single out the specific advertising network through which we purchased our ads.

Different advertising networks will have different properties but, as noted above, the ability to deliver targeted ads to individuals (a key goal of targeted advertising), and then to obtain information when those ads are displayed, is what enables ADINT. Rather than focus on any particular advertising network, we wish to encourage broad discussions -- at an industry-wide level -- about the interactions between our findings, online advertising, and user privacy.

Paper

For more details on our findings, see our peer-reviewed technical paper published at the 2017 Workshop on Privacy in the Electronic Society.



This research was supported in part by the University of Washington Tech Policy Lab, the Short-Dooley Professorship and by NSF Award CNS-1463968.