It is said that Mahatma Gandhi, when asked about Western civilization, remarked, “I think it would be a good idea.” That’s how I feel about intelligent life on Earth, especially when I think about the question of what truly intelligent life might look like elsewhere in the universe.

What do we mean by intelligence? Like life, it’s hard to define, but we need to if we want to search for it. Among the radio astronomers of SETI—the Search for Extraterrestrial Intelligence—it’s only sort-of a joke that the true hallmark of intelligent life is the creation of radio astronomy.

Modern SETI was born as the Cold War simmered. In late 1959 Giuseppe Cocconi and Philip Morrison published, in Nature, calculations showing that existing radio telescopes could detect signals transmitted across interstellar distances. In 1960 Frank Drake decided to search, using the National Radio Astronomy Observatory in Green Bank, West Virginia. The following year he convened a workshop there, which produced the famous “Drake equation” for estimating the number of broadcasting civilizations by taking into account the rate of star formation, the fraction of stars likely to have planets, etc. It was never meant to calculate a specific answer so much as to frame the discussion of how the development of planets, life, and civilizations could affect the likelihood of finding anyone out there to talk to.

When you do the math, the answer depends most crucially on the factor Drake called L—the average longevity of a civilization. If L is small—say, less than 1,000 years—then the distance between civilizations is vast, and the chances of SETI succeeding are nil. But if L is large—say, millions of years—then the galaxy should be full of chattering sentience, some quite near.
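The outsized role of L is easy to see in a back-of-the-envelope sketch. Every parameter value below is an illustrative guess, not a measurement; the point is only that, with everything else held fixed, the estimate scales linearly with L:

```python
# Back-of-the-envelope Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values are illustrative guesses, not measured quantities.

def drake(r_star=1.0,    # average galactic star-formation rate (stars/year)
          f_p=0.5,       # fraction of stars with planets
          n_e=1.0,       # habitable planets per planetary system
          f_l=0.5,       # fraction of habitable planets where life arises
          f_i=0.1,       # fraction of life-bearing planets evolving intelligence
          f_c=0.1,       # fraction of intelligent species that broadcast
          longevity=1000.0):  # L: average broadcasting lifetime (years)
    """Estimated number of currently broadcasting civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * longevity

# Same galaxy, same biology -- only L changes:
print(drake(longevity=1_000))       # short-lived civilizations: N ~ 2.5
print(drake(longevity=10_000_000))  # million-year civilizations: N ~ 25,000
```

A handful of neighbors versus tens of thousands, from the single factor L: that is why the longevity question dominates the whole exercise.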

Wondering whether other technological civilizations could survive for long periods is an excellent way for us to think, from a slightly different perspective, about our current problems. SETI was born in precarious times, so it made sense that pioneers like Drake, Morrison, and Carl Sagan imagined that if L were short, it was because most civilizations might “blow themselves up” in a nuclear holocaust. Given our current Anthropocene anxieties, present-day discussions about L often focus instead on the existential threat of climate change or resource exhaustion and the challenges of sustainability. But the overarching question linking the two eras is the same: Can an advanced technological species develop a long-term stable relationship with world-changing technology?

In fact, I would argue that this makes a better operational definition of intelligence than the “radio intelligence” characterization given above. If you look at how we define intelligence here on Earth, it has to do with abilities like abstract thought, symbolic language, and problem-solving. Such a definition certainly qualifies individual humans (with honorable mentions going to several other terrestrial species). But what good is all this so-called intelligence if we can’t ensure our civilization’s survival against the problems we’re creating with all of our technical cleverness—if we don’t have our act together as a global entity? We’re at least momentarily stuck in this weird stage we might call proto-intelligence.

It seems that this stage is inherently unstable. Right now we have global influence without global self-control. Given the relentless exponential rate of technological progress, this can’t last. This is why many sagacious people have described this coming century as a kind of bottleneck we must get through.

Technological societies elsewhere in the universe may all reach a similar critical juncture. Those that make it through will emerge as a different kind of entity. They would have graduated from proto-intelligence by somehow developing an ability to act as a global unit, thereby avoiding self-made disasters. And they would have unimaginably powerful technology at their disposal, so natural disasters like asteroid strikes and climate fluctuations would also become avoidable. They might be multiplanetary and ultimately interstellar. They could survive for billions of years. Given the math of Drake-type calculations, such quasi-immortal societies, if they exist at all, would dominate the observable civilizations in the universe.

Just as the Gaia hypothesis defines life as an inherently planetary property that cannot be understood through studying individual organisms, we can perhaps conceive of a kind of “planetary intelligence,” a global attribute that develops far beyond our current chaotic self-conflicted stage and becomes a long-term stabilizing influence on a planet. The Gaia hypothesis, though somewhat controversial, has been incredibly fruitful in generating a “systems science” perspective on the co-evolution of planets and life. Similarly, the concept of a transition to planetary intelligence might rub some people the wrong way—but I think it could be useful.

For one thing, it might give us another avenue for pursuing SETI. The “radio intelligence” definition has been justified because it is pragmatic—it gives us something specific to search for. But now we are at the beginning of the exoplanet revolution, as we start to scrutinize the planets we are discovering orbiting nearly every star. And we should keep one eye, or one spectrometer, open for signs of worlds that do not seem “natural,” whether or not they are blasting “galactic public radio” shows in our direction.

If planets can reach a long-lived stage of world-changing intelligence, then we might want to think about what that would look like. Our planet, 4.5 billion years old, is roughly halfway through its life. (Those recent estimates that our planet has only 1.75 billion years of habitability left do not allow for the possibility that we will develop planetary manipulation. How hard could that be?) What if the current troubles of our civilization are really the growth pains marking the beginning of a transformation to planetary intelligence?

Part of the point of SETI has always been a search for answers about our own cosmic potential and destiny. If they are out there, it means that there may be hope for us. It means there is a solution to this puzzle of forging a healthy, long-term relationship between a planet and a technological civilization.

Why should we consider defining intelligence as something global and as something that hasn’t actually yet appeared on Earth? It may be useful for envisioning the future of our own civilization and any others that may be out there among the stars. It might give us something to strive for.

More from Slate’s series on the future of exploration.