Adam Russell, an anthropologist and program manager at the Department of Defense’s mad-science division Darpa, laughs at the suggestion that he is trying to build a real, live, bullshit detector. But he doesn’t really seem to think it’s funny. The quite serious call for proposals Russell just sent out on Darpa stationery asks people—anyone! Even you!—for ways to determine what findings from the social and behavioral sciences are actually, you know, true. Or in his construction: “credible.”

Even for Darpa, that’s a big ask. The DoD has plenty of good reasons to want to know what social science to believe. But plenty more is at stake here. Darpa’s asking for a system that can solve one of the most urgent philosophical problems of our time: How do you know what’s true when science, the news, and social media all struggle with errors, advertising, propaganda, and lies?

Take a scientific claim. Do some kind of operation on it. Determine whether the claim is right enough to act on. So ... a bullshit detector?

“I wouldn’t characterize it that way, and I think it’s important not to,” Russell says. He doesn’t want to feed the cynicism that says that if scientists admit uncertainty, they can’t be trusted. “I have a deep faith that there is real science. It’s not that we know nothing about the world.” Science is still the best way of knowing stuff. Darpa just wants to know what stuff science is really sure about, and how it knows it. And how it knows it knows it.

You can imagine why Darpa and the DoD might want to shore up the social sciences. They want to understand how collective identity works, or why some groups (and nations) are stable and some fall apart. The military would like to get a better handle on how humans team up with machines before the machines get smarter and more of them get deployed. How does radicalization work, especially online? Why do people cooperate at some times and compete at others? All these questions have two things in common: They are super-important to national security, and no one knows the answers.

The people who are supposed to figure those knotty issues out have their own problems. You might have heard about the “reproducibility crisis,” the concern that many scientific findings, particularly in psychology and sociology, don’t pass a fundamental test of validity—that subsequent researchers can do the same experiment and get the same results as the first ones. Or you might be familiar with “p-hacking” and other ways some researchers, under pressure to publish and get grants, cherry-pick their experimental results to ensure the appearance of statistical significance.


Those issues come up in Darpa’s call for proposals, but researchers acknowledge that the concerns don’t end there. “If you ask a bunch of social scientists how organizations work, you’re not just going to get 20 different answers. You’re going to get answers not even comparable to each other,” says Duncan Watts, a sociologist at Microsoft Research who wrote a blistering critique of what he terms the social sciences’ incoherency problem in the January 2017 issue of Nature Human Behaviour. “You read one paper and then another paper, and it’s got the same words in the title but different units of analysis, different theoretical constructs, entirely different notions of causality. By the time you’ve done a literature review, you’re completely confused about what on Earth you even think. This is not about whether any one particular claim can be replicated, right? It’s that collectively the claims don’t make sense.”

But … Darpa, though, right? Impossible problems! Here’s an internet we made you! Darpa! The agency has an overarching program called Next Generation Social Science, set up in 2016 to use economics, sociology, anthropology, and so on to better understand everything from terrorism to the spread of propaganda online. And, yes, it’s an impossible problem. “In emerging fields you begin to see the development of standards as a good signal that something’s happening there,” Russell says. “We certainly don’t have those standards in social sciences.”