When Norbert Wiener, the father of cybernetics, wrote his book The Human Use of Human Beings in 1950, vacuum tubes were still the primary electronic building blocks, and there were only a few actual computers in operation.

But he imagined the future we now contend with in impressive detail and with few clear mistakes. More than any other early philosopher of artificial intelligence, he recognized that AI would not just imitate—and replace—human beings in many intelligent activities but would change human beings in the process. “We are but whirlpools in a river of ever-flowing water,” he wrote. “We are not stuff that abides, but patterns that perpetuate themselves.”

When attractive opportunities abound, for instance, we are apt to be willing to pay a little and accept some small, even trivial cost of doing business for access to new powers. And pretty soon we become so dependent on our new tools that we lose the ability to thrive without them. Options become obligatory.

From "What Can We Do?" by Daniel C. Dennett. Adapted from Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by John Brockman.

It’s an old, old story, with many well-known chapters in evolutionary history. Most mammals can synthesize their own vitamin C, but primates, having opted for a diet composed largely of fruit, lost the innate ability. The self-perpetuating patterns that we call human beings are now dependent on clothes, cooked food, vitamins, vaccinations, credit cards, smartphones, and the internet. And—tomorrow if not already today—AI.

Wiener foresaw several problems with this incipient state of affairs that Alan Turing and other early AI optimists largely overlooked. The real danger, he said, is

that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.

Sure enough, these dangers are now pervasive.

In media, for instance, the innovations of digital audio and video let us pay a small price (at least in the eyes of audiophiles and film lovers) for abandoning analog formats, and in return provide easy—all too easy?—reproduction of recordings with almost perfect fidelity.

But there is a huge hidden cost. Orwell’s Ministry of Truth is now a practical possibility. AI techniques for creating all-but-undetectable forgeries of “recordings” of encounters are now becoming available, and they will render obsolete the tools of investigation we have come to take for granted over the past 150 years.

Will we simply abandon the brief Age of Photographic Evidence and return to the earlier world in which human memory and trust provided the gold standard, or will we develop new techniques of defense and offense in the arms race of truth? (We can imagine a return to analog film-exposed-to-light, kept in “tamper-proof” systems until shown to juries, etc., but how long would it be before somebody figured out a way to infect such systems with doubt?)

One of the disturbing lessons of recent experience is that the task of destroying a reputation for credibility is much less expensive than the task of protecting such a reputation. Wiener saw the phenomenon at its most general: “In the long run, there is no distinction between arming ourselves and arming our enemies.” The information age is also the disinformation age.

What can we do? A key phrase, it seems to me, is Wiener’s almost offhand observation, above, that “these machines” are “helpless by themselves.” As I have been arguing recently, we’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations.