Cantrell, Martin, and Ellis have presented their ideas in a provocative new paper called “Designing Autonomy: Opportunities for New Wildness in the Anthropocene.” To be clear, they’re not remotely saying that “it will ever be technologically, financially, or politically possible to develop and install autonomous wildness creators at meaningful scales.” They’re not even recommending it. “That’s not the direction I want to see us going,” says Cantrell. “The paper has a tongue-in-cheek aspect. We make this proposition and immediately pull back.”

So, then: why?

Because exploring hypothetical futures tells us a lot about the concerns of the present. That's science fiction in a nutshell. Ex Machina, System Shock, and Neuromancer aren't how-to manuals; in their visions of robotic rebellion, they reflect our fears about our own fallibilities. So what happens when we speculate about AI going green instead of going rogue? That tells us something about the ethical questions that pervade modern conservation, about how we see our role in protecting our remaining wilderness, and about what "wild" even means.

“When people try to maintain natural places, there’s a tendency to end up over-curating them,” says Ellis. “So even with the best intentions, everything ends up conforming to what human cultures decide is important.” For example, my colleague Ross Andersen recently wrote about an ambitious and possibly quixotic plan to re-wild the Siberian steppes with resurrected woolly mammoths. Those large beasts once roamed there, sure, but the architects of this plan have made a judgment call about what those now mammoth-less plains should be like. The same goes for the U.S.’s decision to reintroduce wolves to Yellowstone in the 1990s, or New Zealand’s plan to kill all rats on the island by 2050, or the starfish-murdering COTSBot. The last is a perfect example of possible over-curation, says Ellis, because the crown-of-thorns starfish isn’t even an invasive species—it’s a native one that occasionally goes through population outbreaks. “The idea that you’re going to automatically kill a lot of animals in the name of ‘protecting nature’ is a little disturbing,” he says.

“These interventions have been inherently controversial,” says Martin. “There’s already such an effort to present those decisions about which species get to live in a landscape and which do not as purely technical, when in reality, it’s very social and political.” Even when we’re trying to remove our influence, we’re stamping our humanity onto things.

But what if humans weren’t running the show? Artificial intelligence has progressed to a point where machines are capable of developing their own behavior, going beyond their original programs. When Google’s AlphaGo system recently beat the world’s best Go players, it did so with unconventional strategies and moves that no human would ever have made. “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” said reigning human champion Ke Jie.