Most of these failures don’t seem like failures, because users have so internalized their methods that they apologize for them in advance. The best defense against instability is to rationalize uncertainty as intentional—and even desirable.

* * *

The common response to precarious technology is to add even more technology to solve the problems caused by earlier technology. Are the toilets flushing too often? Revise the sensor hardware. Is online news full of falsehoods? Add machine-learning AI to separate the wheat from the chaff. Are retail product catalogs overwhelming and confusing? Add content filtering to show only the most relevant or applicable results.

But why would new technology reduce rather than increase the feeling of precarity? The more technology multiplies, the more it amplifies instability. Things already don’t quite do what they claim. The fixes just make things worse. And so, ordinary devices aren’t likely to feel more workable and functional as technology marches forward. If anything, they are likely to become even less so.

Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.

This might seem like a crazy thing to say. What other purpose do toilets serve than to speed away human waste? No matter its ostensible function, precarious technology separates human actors from the accomplishment of their actions. It acclimates people to the idea that devices are not really there for them, but are means to accomplish those devices’ own, secret goals.

This truth has been obvious for some time. Facebook and Google, so the saying goes, make their users into their products—the real customer is the advertiser or data speculator preying on the information generated by the companies’ free services. But things are bound to get even weirder than that. When automobiles drive themselves, for example, their human passengers will not become masters of a new form of urban freedom, but rather a fuel to drive the expansion of connected cities, in order to spread further the gospel of computerized automation. If artificial intelligence ends up running the news, it will not do so in order to improve citizens’ access to the information necessary to make choices in a democracy, but to further cement the supremacy of machine automation over human editorial judgment in establishing what is relevant.

There is a dream of computer technology’s end, in which machines become powerful enough that human consciousness can be uploaded into them, facilitating immortality. And there is a corresponding nightmare in which the evil robot of a forthcoming, computerized mesh overpowers and destroys human civilization. But there is also a weirder, more ordinary, and more likely future—and it is the one most similar to the present. In that future, technology’s and humanity’s goals split from one another, even as the latter seems ever more yoked to the former. Like people ignorant of the plight of ants, and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans—but not necessarily in the service of human ends. It won’t take a computational singularity for humans to cede their lives to the world of machines. They’ve already been doing so, for years, without even noticing.
