"I think we need to rethink the message process so that we are sending a series of increasingly inclusive messages," Vakoch says. "Any message that we initially send would be too narrow, too incomplete. But that's O.K. Instead, what we should be doing is thinking about how to make the next round of messages better and more inclusive. We ideally want a way to incorporate both technical expertise — people who have been thinking about these issues from a range of different disciplines — and also getting lay input. I think it's often been one or the other. One way we can get lay input in a way that makes a difference in terms of message content is to survey people about what sorts of things they would want to say. It's important to see what the general themes are that people would want to say and then translate those into a Lincos-like message."

When I asked Denning where she stands on the METI issue, she told me: "I have to answer that question with a question: Why are you asking me? Why should my opinion matter more than that of a 6-year-old girl in Namibia? We both have exactly the same amount at stake, arguably, she more than I, since the odds of being dead before any consequences of transmission occur are probably a bit higher for me, assuming she has access to clean water and decent health care and isn't killed far too young in war." She continued: "I think the METI debate may be one of those rare topics where scientific knowledge is highly relevant to the discussion, but its connection to obvious policy is tenuous at best, because in the final analysis, it's all about how much risk the people of Earth are willing to tolerate. . . . And why exactly should astronomers, cosmologists, physicists, anthropologists, psychologists, sociologists, biologists, sci-fi authors or anyone else (in no particular order), get to decide what those tolerances should be?"

Wrestling with the METI question suggests, to me at least, that the one invention human society needs is more conceptual than technological: We need to define a special class of decisions that potentially create extinction-level risk. New technologies (like superintelligent computers) or interventions (like METI) that pose even the slightest risk of causing human extinction would require some novel form of global oversight. And part of that process would entail establishing, as Denning suggests, some measure of risk tolerance on a planetary level. If we don’t, then by default the gamblers will always set the agenda, and the rest of us will have to live with the consequences of their wagers.

In 2017, the idea of global oversight on any issue, however existential the threat it poses, may sound naïve. It may also be that technologies have their own inevitability, and we can only rein them in for so long: If contact with aliens is technically possible, then someone, somewhere is going to do it soon enough. There is not a lot of historical precedent for humans voluntarily swearing off a new technological capability — or choosing not to make contact with another society — because of some threat that might not arrive for generations. But maybe it’s time that humans learned how to make that kind of choice. This turns out to be one of the surprising gifts of the METI debate, whichever side you happen to take. Thinking hard about what kinds of civilization we might be able to talk to ends up making us think even harder about what kind of civilization we want to be ourselves.

Near the end of my conversation with Frank Drake, I came back to the question of our increasingly quiet planet: all those inefficient radio and television signals giving way to the undetectable transmissions of the internet age. Maybe that’s the long-term argument for sending intentional messages, I suggested; even if it fails in our lifetime, we will have created a signal that might enable an interstellar connection thousands of years from now.

Drake leaned forward, nodding. "It raises a very interesting, nonscientific question, which is: Are extraterrestrial civilizations altruistic? Do they recognize this problem and establish a beacon for the benefit of the other folks out there? My answer is: I think it's actually Darwinian; I think evolution favors altruistic societies. So my guess is yes. And that means there might be one powerful signal for each civilization." Given the transit time across the universe, that signal might well outlast us as a species, in which case it might ultimately serve as a memorial as much as a message, like an interstellar version of the Great Pyramids: proof that a technologically advanced organism evolved on this planet, whatever that organism's ultimate fate.

As I stared at Drake’s stained-glass Arecibo message, in the middle of that redwood grove, it seemed to me that an altruistic civilization — one that wanted to reach across the cosmos in peace — would be something to aspire to, despite the potential for risk. Do we want to be the sort of civilization that boards up the windows and pretends that no one is home, for fear of some unknown threat lurking in the dark sky? Or do we want to be a beacon?