Several years ago, some wildlife researchers had concerns about the highly controversial trophy hunting of grizzly bears in British Columbia. The B.C. government said that the management and harvest of the big bears was founded on science. But previous work done by the researchers found that there were some question marks surrounding key grizzly information—basic details such as how many of the animals roamed B.C. and how many were killed each year by poachers. The uncertainty made it harder to know how many bears could appropriately be hunted.

Even so, the province increased the number of bears that could be harvested annually.

So was that government decision an anomaly? Or were other wildlife managers in the U.S. and Canada making similar calls based on similarly incomplete data? Those were questions researchers, including Kyle Artelle, a postdoctoral fellow at the University of Victoria, started asking. What does it mean, anyway, when agencies claim to abide by “science-based management”? Artelle wondered.

The answer? Science doesn't play as large a role in so-called "science-based" wildlife management as one might think, Artelle and his co-authors contend in a study published Wednesday in the journal Science Advances.

“[A]gencies and hunters often justify management approaches by claiming that they follow the so-called ‘North American Model of Wildlife Conservation,’ which has a central tenet that ‘Science is the proper tool to discharge policy,’” the authors wrote in a release. “This new research casts doubt on the extent to which this tenet is followed.”

The authors examined the public documents of 667 wildlife management systems across 62 U.S. states, territories, and Canadian provinces. These documents pertained to hunted species in both countries, from moose in Alaska to mule deer in Washington State to alligators in Florida.

The researchers searched the plans for what they called the four hallmarks of science:

Measurable objectives (that is: Is there a clear, trackable goal?);

Evidence (hard data);

Transparency (an ability for the public to see the work);

Independent review (Did someone, such as a third party, check the agency’s work?).

Why look for these elements? They are the pillars of a sound scientific approach, says Artelle, who is also a biologist at B.C.’s Raincoast Conservation Foundation, an environmental group that uses science to further conservation objectives. “If you knock out any of them, the foundation is compromised.”

What the researchers found—or more precisely, didn’t find—surprised them. “In most cases—60 percent of cases—we found fewer than half of the criteria we were looking for,” Artelle says. “And we set the bar low. We tried to give easy A’s.”

That wasn’t the only deficit. Only 11 percent of wildlife systems explained how hunting quotas are set. This uncertainty is notable given that for many hunted animals, “adult mortality from hunting exceeds mortality from all other predators combined,” the study pointed out.

Fewer than 10 percent of the systems the study looked at reported that they undergo any form of review, even internally. Fewer than 6 percent subjected their systems to review by outside experts. “This deviates substantially from scientific processes,” the authors wrote.

And only 26 percent had measurable objectives—a statement of what would count as success or failure.

“These (and other) findings raise doubts about whether North American wildlife management can accurately be described as science-based,” the authors concluded.

Artelle acknowledges that just because the authors couldn’t find some information doesn’t mean that agencies don’t have it or don’t use it in their decision-making. (The authors asked agencies to fill in gaps about missing info, and sometimes they received it.) But this information needs to be shown to be part of the process, he says. “‘Trust us’ is fundamentally not how science works.”

Robert Garrott, director of the Fish and Wildlife Ecology and Management program at Montana State University in Bozeman, says that while more scientific rigor is always welcome, he disagreed with the authors’ contention that it would protect both wildlife and the agencies that oversee it from conflict—whether social, legal, or political. “The premise of this paper doesn’t reflect the reality of how wildlife is managed,” Garrott says. “Science does not dictate goals for wildlife. Society does that….It comes from the messy process of politics, because everybody owns [wildlife] and everybody has a say.” The authors seem to argue that “if we had more science, we wouldn’t have social and political conflict,” he says. “That isn’t how it works in North America.”

Artelle agrees with much of that critique. “We do not think that science alone should drive wildlife management,” he says. “Science can only tell us how the world works; it doesn’t tell us how it should work.”

The authors are most concerned about the cases where agencies claim that they’re using science in management decisions, but where in fact it’s just rhetoric, Artelle says. More scientific rigor could show where the science ends and the politics begin—be it lobbying by hunters, or ranchers, or animal-rights activists.