In our drones report, we discuss the coming onslaught of domestic drones and the weak state of the privacy laws that should protect us, and we outline our recommendations for protections that Congress and local governments should put in place.

But if nothing is done, how might things go? Let’s take a look at how police drone use could unfold:

The FAA’s new rules go into effect. Acting under orders from Congress, the FAA in coming months and years will significantly loosen the regulations that have been holding back broader deployment of drones. Starting later this year, for example, the FAA must allow any “government public safety agency” to operate any small drone (under 4.4 pounds) as long as certain conditions are met.

More and more police departments begin using them. The FAA’s new rules unleash pent-up demand among police departments for cheap aerial surveillance. Ownership of drones quickly becomes common among departments large and small. Police drone operators form organizations to exchange tips and advice. We also begin to hear about deployment by federal agencies other than those operating on the border.

We start to hear stories about how they’re being used. Most departments and agencies are relatively careful at first, and the early stories involve drones being put to use in specific, mostly unobjectionable police operations such as raids, chases, and searches supported by warrants.

Drone use broadens. Fairly quickly, however, we begin to hear about a few departments deploying drones for broader, more general uses: drug surveillance, marches and rallies, and generalized monitoring of troubled neighborhoods.

Private use is banned. A terrorist like the pilot who crashed his plane into an IRS building in Texas uses an explosives-laden drone to try to attack a public facility. In response, the government clamps down on private use of the technology. The net result is that the government can use it for surveillance but individuals cannot use it to watch the government.

Drones become able to mutually coordinate. Multiple drones deployed over neighborhoods can be linked together, and communicate and coordinate with each other (see this video for an early taste of what that could look like). This allows a swarm of craft to form a single, distributed wide-area surveillance system such as that envisioned by the “Gorgon Stare” program.

The analytics get better. At the same time, drones and the computers behind them become more intelligent and capable of analyzing the video feeds they generate. They gain the ability to automatically track multiple vehicles and bodies as they move around a city or town, with different drones handing off the tracking to each other just as a mobile phone network passes a signal from one cell to another as a user rides down the highway.
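The handoff described above can be sketched in miniature. This toy Python example (all names, positions, and coverage radii are hypothetical illustrations, not any real system) keeps a target with its current drone while that drone’s coverage circle still contains it, and otherwise hands the track to the nearest drone that does — the same keep-until-lost, then-reassign logic a cell network uses.

```python
import math

class Drone:
    """A drone with a fixed position and a circular coverage area."""
    def __init__(self, name, x, y, radius):
        self.name = name
        self.pos = (x, y)
        self.radius = radius

    def covers(self, point):
        # True if the target point lies inside this drone's coverage circle.
        dx = point[0] - self.pos[0]
        dy = point[1] - self.pos[1]
        return math.hypot(dx, dy) <= self.radius

def assign_tracker(drones, target, current=None):
    """Keep the current drone while it still covers the target;
    otherwise hand off to the nearest drone whose circle covers it."""
    if current is not None and current.covers(target):
        return current
    candidates = [d for d in drones if d.covers(target)]
    if not candidates:
        return None  # target has left every drone's coverage area
    return min(candidates,
               key=lambda d: math.hypot(target[0] - d.pos[0],
                                        target[1] - d.pos[1]))

drones = [Drone("A", 0, 0, 5), Drone("B", 8, 0, 5)]
tracker = None
for point in [(1, 0), (4, 0), (7, 0), (10, 0)]:  # target moving east
    tracker = assign_tracker(drones, point, tracker)
    print(point, "->", tracker.name if tracker else "lost")
```

As the target moves east it stays with drone A until it leaves A’s circle, then the track passes seamlessly to drone B — a single continuous record assembled from two separate cameras.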

Flight durations grow. Technology improvements (involving blimps, perhaps, or solar-power innovations) allow for drones to stay aloft for longer periods more cheaply, which becomes key in permitting their use for persistent surveillance.

The cycle accelerates. The advancing technology incentivizes agencies to buy even more drones, which in turn spurs more technology development, and the cycle becomes self-perpetuating.

Laws are further loosened. As drones get smarter and more reliable and very good at sensing and avoiding other aircraft, FAA restrictions are further loosened, permitting even autonomous flight.

Pervasive tracking becomes common. Despite opposition, a few police departments begin deploying drones 24/7 over certain areas. The media covers the controversy but Congress takes no action, and eventually it becomes old news, and the practice spreads until many or most American towns and cities are subject to it.

Technologies are combined. Drone video cameras and tracking analytics are combined or synched up with other technologies such as face recognition, gait recognition, license-plate scanners, and cell phone location data.

The data is mined. With individuals’ comings and goings routinely monitored, databases are able to build up records of where people live, work, and play—what friends they visit, bars they drink at, doctors they see, which houses of worship, political events, or sexually oriented establishments they go to—and who else is at those places at the same time. Computers comb through this data looking for “suspicious patterns,” and when the algorithms kick up an alarm, the person involved becomes the subject of much more extensive surveillance.
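The kind of mining described here is algorithmically trivial, which is part of the concern. The toy Python sketch below (the data, the watched location, and the threshold are all invented for illustration) shows how a log of (person, place, hour) sightings yields both co-location lists and a crude “suspicious pattern” flag with a few lines of code.

```python
from collections import defaultdict

# Hypothetical sightings log: (person, place, hour-of-day).
sightings = [
    ("alice", "clinic", 9), ("alice", "rally", 18),
    ("bob", "rally", 18), ("bob", "rally", 20),
    ("bob", "rally", 22), ("carol", "bar", 23),
]

def location_history(log):
    """Build each person's record of where they go and when."""
    history = defaultdict(list)
    for person, place, hour in log:
        history[person].append((place, hour))
    return history

def co_present(log, place, hour):
    """Everyone recorded at the same place during the same hour."""
    return {p for p, pl, h in log if pl == place and h == hour}

def flag_frequent(log, watched_place, threshold=3):
    """Flag anyone seen at a watched location at least `threshold`
    times -- a crude stand-in for a 'suspicious pattern' alarm."""
    counts = defaultdict(int)
    for person, place, _ in log:
        if place == watched_place:
            counts[person] += 1
    return {p for p, n in counts.items() if n >= threshold}

print(co_present(sightings, "rally", 18))  # who attended together
print(flag_frequent(sightings, "rally"))   # who crossed the threshold
```

Nothing here requires sophistication; once location data is centralized, building dossiers and tripping alarms on ordinary behavior is a matter of a database query.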

Ultimately, such surveillance leads to an oppressive atmosphere where people learn to think twice about everything they do, knowing that it will be recorded, charted, scrutinized by increasingly intelligent computers, and possibly used to target them.

I’m not sure how realistic this scenario is. Perhaps it is far-fetched (I hope so). But the questions to ask are: which of the above steps is unlikely to take place, and why? And if we don’t end up in the situation described, how close will we get?