Apple and Google are undertaking an unprecedented team effort to build a system for Androids and iPhones to interoperate in the name of technology-assisted COVID-19 contact tracing.

The companies’ plan is part of a torrent of proposals to use Bluetooth signal strength to enhance manual contact tracing with proximity-based mobile apps. As Apple and Google are an effective duopoly in the mobile operating system space, their plan carries special weight. Apple and Google’s tech would be largely decentralized, keeping most of the data on users’ phones and away from central databases. This kind of app has some unavoidable privacy tradeoffs, as we’ll discuss below, and Apple and Google could do more to prevent privacy leaks. Still, their model is engineered to reduce the privacy risks of Bluetooth proximity tracking, and it’s preferable to other strategies that depend on a central server.

Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19. This use of Bluetooth technology is unproven and untested, and it’s designed for use in smartphone apps that won’t reach everyone. The apps built on top of Apple and Google’s new system will not be a “magic bullet” technosolution to the current state of shelter-in-place. Their effectiveness will rely on numerous tradeoffs and sufficient trust for widespread public adoption. Insufficient privacy protections will reduce that trust and thus undermine the apps’ efficacy.

How Will It Work?

As soon as today, Apple and Google are beginning to roll out parts of the iPhone and Android infrastructure that developers need to be able to build Bluetooth-based proximity tracking apps. If you download one of these apps, it will use your phone’s Bluetooth chip to do what Bluetooth does: emit little radio pings to find other devices. Usually, these pings are looking for your external speakers or wireless mouse. In the case of COVID-19 proximity tracking apps, they will be reaching out to nearby people who have also opted into using Bluetooth for this purpose. Their phones will also be emitting and listening for those pings. The apps will use Bluetooth signal strength to estimate the distance between the two phones. If they are sufficiently close—6 feet or closer, based on current CDC guidance—both will log a contact event.
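Distance estimation from signal strength is typically done with a path-loss model. Here is a rough sketch — not part of Apple and Google's spec; the calibration value and exponent are assumptions that vary by device and environment:

```python
def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-59, path_loss_exponent=2.0):
    """Log-distance path-loss model: at exponent 2.0, every ~6 dB drop
    in received signal strength roughly doubles the estimated distance.

    rssi_at_1m_dbm: assumed calibration value (device-specific).
    path_loss_exponent: ~2.0 in free space; higher indoors.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

def looks_like_contact(rssi_dbm, threshold_m=1.83):  # 6 feet is about 1.83 m
    """Crude contact heuristic: strong enough signal implies 'close enough'."""
    return estimate_distance_m(rssi_dbm) <= threshold_m
```

In practice, body attenuation, walls, and device orientation make this estimate noisy — one reason the leap from "strong signal" to "close contact" is so hard.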

There are now many different proposals to do basically this same thing, with slightly different considerations for efficiency, security, and privacy. The rest of this post looks at Apple and Google’s proposal (version 1.1) in particular.

Each phone will generate a new special-purpose private key each day, known as a “temporary exposure key.” It will then use that key to generate random identification numbers called “rolling proximity identifiers” (RPIDs). Pings will go out at least once every five minutes when Bluetooth is enabled. Each ping will contain the phone’s current RPID, which will change every 10 to 20 minutes. This is meant to reduce the risk that third-party trackers can use the pings to passively track people’s locations. The operating system will save all of its temporary exposure keys, and log all the RPIDs it comes into contact with, for the past 2 weeks.
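The key-and-identifier scheme can be sketched in a few lines of Python. This is a simplified stand-in, not the spec: HMAC-SHA256 substitutes for the actual HKDF-and-AES derivation, and the label string is illustrative.

```python
import os, hmac, hashlib

RPID_INTERVAL_MINUTES = 10            # spec allows 10-20 minute rotation
INTERVALS_PER_DAY = 24 * 60 // RPID_INTERVAL_MINUTES

def new_temporary_exposure_key():
    """One fresh random key per day; it never leaves the device
    unless the user chooses to report an infection."""
    return os.urandom(16)

def rolling_proximity_identifier(tek, interval_number):
    """Derive the RPID broadcast during one rotation interval.
    Simplified: HMAC-SHA256 stands in for the spec's HKDF + AES."""
    msg = b"EN-RPID" + interval_number.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

# A phone's broadcasts for one day are fully determined by that day's key:
tek = new_temporary_exposure_key()
day_rpids = [rolling_proximity_identifier(tek, i) for i in range(INTERVALS_PER_DAY)]
```

The one-way derivation is the point: observers see only unlinkable-looking RPIDs, but anyone holding the day's key can regenerate all of them.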




If an app user learns they are infected, they can grant a public health authority permission to publicly share their temporary exposure keys. In order to prevent people from flooding the system with false alarms, health authorities need to verify that the user is actually infected before they may upload their keys. After they are uploaded, a user’s temporary exposure keys are known as “diagnosis keys.” The diagnosis keys are stored in a public registry and available to everyone else who uses the app.

The diagnosis keys contain all the information needed to re-generate the full set of RPIDs associated with each infected user’s device. Participating apps can use the registry to compare the RPIDs a user has been in contact with against the RPIDs of confirmed COVID-19 carriers. If the app finds a match, the user gets a notification of their risk of infection.
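Matching can be sketched the same way: the app re-derives every RPID behind each published diagnosis key and intersects that set with the RPIDs logged on the device. Again, HMAC-SHA256 here is a simplified stand-in for the spec's actual derivation.

```python
import hmac, hashlib

def rolling_proximity_identifier(tek, interval_number):
    # Simplified derivation (HMAC stands in for the spec's HKDF + AES).
    msg = b"EN-RPID" + interval_number.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

def exposure_matches(diagnosis_keys, observed_rpids, intervals_per_day=144):
    """Regenerate every RPID for each published diagnosis key, then
    intersect with the RPIDs this device logged over the past two weeks."""
    infected_rpids = {
        rolling_proximity_identifier(key, i)
        for key in diagnosis_keys
        for i in range(intervals_per_day)
    }
    return infected_rpids & set(observed_rpids)
```

A non-empty intersection triggers the notification. Note that the comparison happens entirely on the user's device; the registry never learns who was exposed.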

The program will roll out in two phases. In phase 1, Google and Apple are building a new API into their respective platforms. This API will contain the bare-bones functionality necessary to make their proximity tracking scheme work on both iPhones and Androids. Other developers will have to build the apps that actually use the new API. Draft specifications for the API have already been published, and it could be available for developers to use this week. In phase 2, the companies say that proximity tracking “will be introduced at the operating system level to help ensure broad adoption.” We know a lot less about this second phase.

Will It Work?

Several technical and social challenges stand in the way of automated proximity tracking. First, these apps assume that “cell phone = human.” But even in the U.S., cell phone adoption is far from universal. Elderly people and low-income households are less likely to own smartphones, which could leave out many people at the highest risk for COVID-19. Many older phones won’t have the technology necessary for Bluetooth proximity tracking. Phones can be turned off, left at home, run out of battery, or be set to airplane mode. So even a proximity tracking system with near-universal adoption is going to miss millions of contacts each day.


Second, proximity tracking apps have to make the profound leap from “there is a strong Bluetooth signal near me” to “two humans are experiencing an epidemiologically relevant contact.” Bluetooth technology was not made for this. An app may log a connection when two people wearing masks briefly pass each other on a windy sidewalk, or when two cars with windows up sit next to each other in traffic. The proximity of a patient to a nurse in full PPE may look the same to Bluetooth as the proximity of two people kissing. Also, Bluetooth can be disrupted by large concentrations of water, like the human body. In some situations, although two people may be close enough to touch, their phones may not be able to establish radio contact. Accurately estimating the distance between two devices is even more difficult.

Third, Apple and Google’s proposal currently specifies that phones will broadcast signals as seldom as once every five minutes. So even under otherwise optimal conditions, two phones may not log a contact until they’ve been near each other for the requisite amount of time.

Fourth, a significant portion of the population must actually use the apps. In Singapore, a government-developed app has only achieved about 20% adoption after several weeks. As a mobile platform duopoly, Apple and Google are in perhaps the best position possible to encourage the deployment of a new piece of software at scale. Even so, adoption may be slow, and it will never be universal.

Will It Be Private and Secure?

The truth is, nobody really knows how effective proximity tracking apps will be. Further, we need to weigh the potential benefits against the very real risks to privacy and security.

First, any proximity tracking system that checks a public database of diagnosis keys against RPIDs on a user’s device—as the Apple-Google proposal does—leaves open the possibility that the contacts of an infected person will figure out which of the people they encountered is infected. For example, if you have a contact with a friend, and your friend reports that they are infected, you could use your own device’s contact log to learn that they are sick. Taken to an extreme, bad actors could collect RPIDs en masse, connect them to identities using face recognition or other tech, and create a database of who’s infected. Other proposals, like the EU’s PEPP-PT and France and Germany’s ROBERT, purport to prevent this kind of attack, or at least make it more difficult, by performing matching on a central server; but this introduces more serious risks to privacy.

Second, Apple and Google’s choice to have infected users publicly share their once-per-day diagnosis keys—instead of just their every-few-minute RPIDs—exposes those people to linkage attacks. A well-resourced adversary could collect RPIDs from many different places at once by setting up static Bluetooth beacons in public places, or by convincing thousands of users to install an app. The tracker will receive a firehose of RPIDs at different times and places. With just the RPIDs, the tracker has no way of linking its observations together.

If a bad actor were to set up Bluetooth beacons or use an app to collect people’s RPIDs along with their locations, all they would get is a scattered map: lots of different pings, but no indication of which pings belong to which individual.

But once a user uploads their daily diagnosis keys to the public registry, the tracker can use them to link together all of that person’s RPIDs from a single day.

If someone uploads their daily diagnosis keys to a central server, a bad actor could then use those keys to link together multiple RPID pings. This can expose their daily routine, such as where they live and work.

This can create a map of the user’s daily routine, including where they work, live, and spend time. Such movement patterns are unique to each person, so they could be used to identify the person behind the uploaded diagnosis keys. Furthermore, they can reveal a person’s home address, place of employment, and trips to sensitive locations like a church, an abortion clinic, a gay bar, or a substance abuse support group. The risk of location tracking is not unique to Bluetooth apps, and actors with the resources to pull off an attack like this likely have other ways of acquiring similar information, from cell towers or third-party data brokers. But the risks associated with Bluetooth proximity tracking in particular should be reduced wherever possible.
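The linkage is mechanical once a daily key is public. A hypothetical tracker that logged (RPID, time, place) tuples could run something like the following, with HMAC-SHA256 as a simplified stand-in for the spec's key derivation — the exact spec differs, but the attack shape is the same:

```python
import hmac, hashlib

def rolling_proximity_identifier(tek, interval_number):
    # Simplified derivation (HMAC stands in for the spec's HKDF + AES).
    msg = b"EN-RPID" + interval_number.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

def link_trail(diagnosis_key, sightings, intervals_per_day=144):
    """sightings: (rpid, time, place) tuples logged by hostile beacons.
    Returns the subset of sightings traceable to the key's owner."""
    their_rpids = {rolling_proximity_identifier(diagnosis_key, i)
                   for i in range(intervals_per_day)}
    return [(t, place) for rpid, t, place in sightings if rpid in their_rpids]
```

One published daily key links a full day of sightings this way; a shorter-lived key would cap how much of a person's movement any single key can stitch together.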

This risk can be mitigated by shortening the time that a single diagnosis key is used to generate RPIDs, at the cost of increasing the download size of the exposure database. Similar projects, like MIT’s PACT, propose using hourly keys instead of daily keys.
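The tradeoff is easy to quantify with back-of-envelope arithmetic, assuming 16-byte keys and a 14-day exposure window per the draft spec (the infected-user count here is made up):

```python
KEY_BYTES = 16     # size of one temporary exposure key, per the draft spec
WINDOW_DAYS = 14   # how far back exposure is checked

def registry_bytes(infected_users, keys_per_day):
    """Download size of the diagnosis-key registry (keys only, no metadata)."""
    return infected_users * keys_per_day * WINDOW_DAYS * KEY_BYTES

daily = registry_bytes(100_000, 1)    # daily keys, as in Apple and Google's v1.1
hourly = registry_bytes(100_000, 24)  # hourly keys, as in MIT's PACT proposal
```

Hourly keys multiply the registry by 24 — roughly 22 MB versus 538 MB in this sketch — but limit how much of a person's day any one published key can link together.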

Third, police may seek data created by proximity apps. Each user’s phone will store a log of their physical proximity to the phones of other people, and thus of their intimate and expressive associations with some of those people, for several weeks. Anyone who has access to the proximity app data from two users’ phones will be able to see whether, and on what days, they have logged contacts with each other. This risk is likely inherent to any proximity tracking protocol. It should be mitigated by giving users the option to selectively turn off the app and delete proximity data from certain time periods. Like many other privacy threats, it should also be mitigated with strong encryption and passwords.

Apple and Google’s protocol may be susceptible to other kinds of attacks. For example, there’s currently no way to verify that the device sending an RPID is actually the one that generated it, so trolls could collect RPIDs from others and rebroadcast them as their own. Imagine a network of Bluetooth beacons set up on busy street corners that rebroadcast all the RPIDs they observe. Anyone who passes by a “bad” beacon would log the RPIDs of everyone else who was near any one of the beacons. This would lead to a lot of false positives, which might undermine public trust in proximity tracing apps—or worse, in the public health system as a whole.

What Should App Developers Do?

Apple and Google’s phase 1 is an API, which leaves it to the rest of the world to develop the actual apps that use the new API. Google and Apple have said they intend “public health authorities” to make apps. But most health authorities won’t have the in-house technical resources to do that, so it’s likely they will partner with private companies. Anyone who builds an app on top of the interface will have to do a lot of things right to make sure it’s private and secure.

Bad-faith app developers may try to tear down the tech giants’ carefully constructed privacy guarantees. For example, although a user’s data is supposed to stay on their device, an app with access to the API might be able to upload everything to a remote server. It could then link daily temporary exposure keys to a mobile ad ID or other identifier, and exploit users’ association history to profile them. It could also use the app as a “Trojan horse” to convince users to agree to a whole suite of more invasive tracking.

So, what’s a responsible app developer to do? For starters, they should respect the protocol they’re building on. Developers shouldn’t try to graft a more “centralized” protocol, which shares more data with a central authority, on top of Apple and Google’s more “decentralized” model that keeps users’ data on their devices. Also, developers shouldn’t share any data over the Internet beyond what is absolutely necessary: just uploading diagnosis keys when an infected user chooses to do so.

Developers should be extremely up-front with their users about what data the app is collecting and how to stop it. Users should be able to stop and start sharing RPIDs at any time. They also should be able to see the list of the RPIDs they’ve received, and delete some or all of that contact history.


Equally important is what not to do. This is a public health crisis, not a chance to grow a startup. Developers should not require users to create an account for anything. Nor should they ship a contact tracing app with extra, unnecessary features. The app should do its job and get out of the way, not try to onboard users to a new service.

Obviously, proximity tracing apps shouldn’t have anything to do with ads (and the exploitative, data-sucking mess that comes with them). Likewise, they shouldn’t use analytics libraries that share data with third parties. In general, developers should use strong, transparent technical and policy safeguards to wall this data off to COVID-19 purposes and only COVID-19 purposes.

The whole system depends on trust. If users don’t trust that an app is working in their best interests, they will not use it. So developers need to be as transparent as possible about how their apps work and what risks are involved. They should publish source code and documentation so that tech-savvy users and independent technologists can check their work. And they should invite security audits and penetration testing from professionals to be as confident as possible that their apps actually do what they say they will.

All of this will take time. There’s a lot that can go wrong, and too much is at stake to afford rushed, sloppy software. Public health authorities and developers should take a step back and make sure they get things right. And users should be wary of any apps that ship out in the days following Apple and Google’s first API release.

What Should Apple and Google Do?


During the first phase, Apple and Google have said that the API can “only [be] used for contact tracing by public health authorities apps,” which “will receive approval based on a specific set of criteria designed to ensure they are only administered in conjunction with public health authorities, meet our privacy requirements, and protect user data.” Apple and Google should be transparent and specific about exactly what these criteria are. Through these criteria, the companies can control what other permissions apps have. For example, they could prevent COVID-19 proximity tracking apps from accessing mobile ad IDs or other device identifiers. They could also make more detailed policy prescriptions, like requiring that any app using the API have a clear mechanism for users to go back and delete parts of their contact log. Apple and Google’s app store approval criteria and related restrictions must also be evenly applied; if Apple and Google make exceptions for governments or companies that they are friendly with, they would undermine the trust necessary for informed consent.

In the second phase, the companies will build the proximity tracking technology directly into Android and iOS. This means no separate app will be needed to log contacts, though Apple and Google propose that users be prompted to download a public health app if an exposure match is detected. All of the recommendations for app developers above also apply to Apple and Google here. Critically, the promised opt-in must obtain specific, informed consent from each user before activating any kind of proximity tracking. They need to make it easy for users who opt in to later opt out, and to view and delete the data that the device has collected. They should create strong technical barriers between the data collected for proximity tracking and everything else. And they should open-source their implementations so that independent security analysts can check their work.


Finally, this program must sunset when the COVID-19 crisis is over. Proximity tracking apps should not be repurposed for other things, like tracking milder seasonal flu outbreaks or finding witnesses to a crime. Google and Apple have said that they “can disable the exposure notification system on a regional basis when it is no longer needed.” This is an important ability, and Apple and Google should establish a clear, concrete plan for ending this program and removing the APIs from their operating systems. They should publicly state how they will define “the end of the crisis,” including what criteria they will look for, and which public health authorities will guide them.

There will be no quick tech solution to COVID-19. No app will let us return to business as usual. App-assisted contact tracing will have serious limitations, and we don’t yet know the scope of the benefits. If Apple and Google are going to spearhead this grand social experiment, they must do it in a way that keeps privacy risks to an absolute minimum. And if they want it to succeed, they must earn and keep the public’s trust.