In 1993, after five years of grad school and low-wage postdoctoral research, Michael Kremer got a job as a professor of economics at MIT. With his new salary, he finally had enough money to fund a long-held desire: to return to Kenya’s Western Province, where he had lived for a year after college, teaching in a rural farming community. He wanted to see the place again, reconnect with his host family and other friends he’d made there.

When he arrived the next summer, he found out that one of those friends had begun working for an education nonprofit called ICS Africa. At the time, there was a campaign, spearheaded by the World Bank, to provide free textbooks throughout sub-Saharan Africa, on the assumption that this would boost test scores and keep children in school longer. ICS had tasked Kremer’s friend with identifying target schools for such a giveaway.

While chatting with his friend about this, Kremer began to wonder: How did ICS know the campaign would work? It made sense in theory—free textbooks should mean more kids read them, so more kids learn from them—but ICS had no evidence to back that up. On the spot, Kremer suggested a rigorous way to evaluate the program: Identify twice as many qualifying schools as ICS had the money to support. Then randomly pick half of those schools to receive the textbooks, while the rest got none. By comparing outcomes between the two cohorts, ICS could gauge whether the textbooks were making a difference.
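The assignment-and-comparison logic Kremer proposed is simple enough to sketch in a few lines of Python. Everything here is hypothetical—the school names, the test scores, and the cohort size are made-up stand-ins, not data from the actual study:

```python
import random
import statistics

def run_trial(schools, measure_score, seed=0):
    """Randomly split schools into treatment and control cohorts,
    then compare mean outcomes between the two groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = schools[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    # The estimated effect is the difference in average outcomes.
    effect = (statistics.mean(measure_score(s) for s in treatment)
              - statistics.mean(measure_score(s) for s in control))
    return treatment, control, effect

# Hypothetical example: eight schools with made-up average test scores.
scores = {"A": 61, "B": 55, "C": 58, "D": 63, "E": 57, "F": 60, "G": 54, "H": 62}
treated, untreated, diff = run_trial(list(scores), scores.get)
```

Because the split is random, any systematic difference between the cohorts' outcomes can be attributed to the intervention rather than to preexisting differences between schools—provided the sample is large enough.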

What Kremer was suggesting is a scientific technique that has long been considered the gold standard in medical research: the randomized controlled trial. At the time, though, such trials were used almost exclusively in medicine—and were conducted by large, well-funded institutions with the necessary infrastructure and staff to manage such an operation. A randomized controlled trial was certainly not the domain of a recent PhD, partnering with a tiny NGO, out in the chaos of the developing world.

But soon after Kremer returned to the US, he was startled to get a call from his friend. ICS was interested in pursuing his idea. Sensing a rare research opportunity, Kremer flew back to Kenya and set to work. By any measure it was a quixotic project. The farmers of western Kenya lived in poverty, exposed to drought, flood, famine, and disease. Lack of paved roads hampered travel; lack of phones impeded communication; lack of government records stymied data collection; lack of literate workers slowed student testing. For that matter, a lack of funds limited the scope. It was hardly an ideal laboratory for a multiyear controlled trial, and not exactly a prudent undertaking for a young professor with a publishing track record to build.

The study wound up taking four years, but eventually Kremer had a result: The free textbooks didn’t work. Standardized tests given to all students in the study showed no evidence of improvement on average. The disappointing conclusion launched ICS and Kremer on a quest to discover why the giveaway wasn’t helping students learn, and what programs might be a better investment.

As Kremer was realizing, the campaign for free textbooks was just one of countless development initiatives that spend money in a near-total absence of real-world data. Over the past 50 years, developed countries have spent something like $6.5 trillion on assistance to the developing world, most of those outlays guided by little more than macroeconomic theories, anecdotal evidence, and good intentions. But if it were possible to measure the effects of initiatives, governments and nonprofits could determine which programs actually made the biggest difference. Kremer began collaborating with other economists and NGOs in Kenya and India to test more strategies for bolstering health and education.

At home, meanwhile, his work was helping to inspire a small movement of economists and other social scientists—playfully dubbed the “randomistas,” in reference to the randomized nature of the studies. In 2003, a few years after Kremer had moved across town to Harvard, three like-minded economists at MIT launched a research institution, now called J-PAL (the full moniker is the Abdul Latif Jameel Poverty Action Lab, named for the late father of a donor), to promote the use of randomized controlled trials on questions of poverty and development. They work closely with an independent sister NGO, Innovations for Poverty Action (IPA), which implements evaluations in the field. Kremer joined both groups as an affiliated researcher.

In the decade since their founding, J-PAL and IPA have helped 150 researchers conduct more than 425 randomized controlled trials in 55 countries, testing hypotheses on subjects ranging from education to agriculture, microfinance to malaria prevention, with new uses cropping up every year (see “Randomize Everything,” below). Economists trained on randomized controlled trials now serve on the faculties of top programs, and some universities have set up their own centers to support their growing rosters of experiments in the social sciences.

Their results have challenged—or, in some cases, confirmed with hard data—widely held beliefs about aid strategies that command billions of dollars in annual outlays. It turns out that retrospective analysis of a program’s impact, or even suggestive case studies from a few targeted households, can be worse than useless in understanding how a program actually affects a community in the real world.

J-PAL researchers will be the first to caution that each study is specific to its context. What works in one community may not in another. But in the realm of human behavior, just as in the realm of medicine, there’s no better way to gain insight than to compare the effect of an intervention to the effect of doing nothing at all. That is: You need a randomized controlled trial. And that means you need to climb down from the ivory tower and do some serious legwork in the places you’re trying to help.

The first thing you need to know about randomized controlled trials, especially those pertaining to economics and human behavior, is that they’re hard—very hard. To evaluate the textbook campaign, ICS Africa had to collaborate with the Kenyan education ministry to choose 100 schools in the rural Western Province, ensure the textbooks got to the assigned schools, and develop and administer tests to thousands of students whom ICS then tracked for the next four years. And that was simple compared with some of the other developing-world trials that IPA researchers have gone on to construct. For example, when two researchers were trying to study the effects of different pricing models for getting people to use bed nets to protect against malaria, the local nonprofit that sold subsidized nets declined to help them with the research or even sell them nets at the discounted rate. So the team had to spend a year recruiting a local research staff and drumming up funds to buy thousands of bed nets. Then they had to win the cooperation of 20 different prenatal clinics in Kenya, and then they had to oversee the experiment as it tracked the behavior of 10,000 pregnant women.

But there’s a beautiful utility that can emerge from accumulating data, as interventions that researchers expect to be marginal—or, in some cases, weren’t even thought of when the study began—reveal themselves to be highly effective. For example, after ICS’s textbook campaign, it worked with Kremer to test a host of other strategies for increasing school participation rates in western Kenya, from subsidized meals to free uniforms to merit scholarships. One of the most cost-effective ways to boost attendance came as a big surprise: treatment for intestinal worms, which caused absenteeism to drop by one-quarter. And it wasn’t only the schools receiving treatment that benefited. Attendance also rose at nearby schools as the overall transmission rate in the region dropped. The researchers calculated that, on average, deworming “buys” one extra year of school attendance for just $3.50, less expensive than any other intervention tested. This unexpected finding has led researchers to found an initiative called Deworm the World, which has worked in partnership with governments and NGOs to treat 37 million children.
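The cost-effectiveness metric behind that $3.50 figure is straightforward: total program cost divided by the additional years of attendance the program generates. The per-child cost and attendance gain below are hypothetical stand-ins chosen to illustrate the arithmetic, not the study’s actual inputs:

```python
def cost_per_extra_school_year(cost_per_child, extra_years_per_child):
    """Dollars spent per additional year of school attendance gained."""
    return cost_per_child / extra_years_per_child

# Hypothetical inputs: a treatment costing $0.70 per child that yields
# 0.2 extra years of attendance per child works out to $3.50 per year.
rate = cost_per_extra_school_year(0.70, 0.2)
```

Computing this same ratio for every intervention tested—meals, uniforms, scholarships, deworming—is what lets researchers rank them on a common scale and spot the surprise winner.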

Similarly, a 2004 experiment to promote water treatment wound up suggesting a solution that the researchers hadn’t imagined. In this case, the larger goal was to combat diarrheal diseases, which kill millions of people every year, especially children under 5. Chlorine treatments can render water safe, but despite years of education efforts in Kenya, few people purchased and used the chlorine solution, even though it was widely available.

To test remedies, the researchers identified 88 springs that supplied nearly 2,000 households in western Kenya. Surveys of local women, who usually collect the water for the family and monitor children’s health, found that 70 to 90 percent knew about the chlorine product but only 5 percent used it, and IPA’s in-home tests detected chlorine in the water of just 2 percent of households. These women knew how to make their water safe, but they weren’t doing it.

After that, the researchers spent four years testing different interventions. Giving away the chlorine solution helped in the near term, but when the free supply ran out, usage fell off. Half-off coupons for chlorine were a bust; out of 2,724 coupons handed out, just 10 percent were ever redeemed. The study also tested whether local “promoters,” sent door to door with one free voucher per family, might succeed in evangelizing the use of chlorine among their neighbors. Promoters did make a difference in the short term; in this cohort, 40 percent of household water samples showed evidence of chlorine. But that number fell significantly when the vouchers ran out.