I normally hate to be critical of specific charities, but I’m going to make an exception because I’ve just found one of the worst charities in the world.

Homeopaths Without Borders (HWB) has provided homeopathic care and education in Guatemala, El Salvador, the Dominican Republic and Sri Lanka. Since the 2010 earthquake in Haiti, it has focused its efforts there, too. Besides minor ailments, HWB also treats malaria, typhoid, cholera, dengue fever, and advanced diabetes, and educates people about the “beneficial effects” of these treatments.

Laugh or cry? I can’t decide. There’s something deeply wrong with an organization that deludes the barely educated global poor with the false hope of a malaria treatment, when they could have been seeking assistance that might actually save their lives. It’s even more wrong that it can get 501(c)(3) tax-exempt status in the US.

I have nothing against the people behind the organization; they likely have the noble intention of doing their best to alleviate suffering in the developing world. But here are the facts. Five independent meta-analyses by the Cochrane Collaboration, which provides impartial and independent summaries and analyses of the relevant scientific literature, found no evidence that homeopathy outperforms placebo. In 2005, a meta-analysis published in the Lancet compared 110 trials of homeopathy with 110 matched trials of conventional medicine and found no evidence that homeopathic medicine outperformed placebo. And the US government’s National Center for Complementary and Alternative Medicine says:

There is little evidence to support homeopathy as an effective treatment for any specific condition…. Several key concepts of homeopathy are inconsistent with fundamental concepts of chemistry and physics.

I contacted Homeopaths Without Borders for comment. The organization referred me to research and information provided by the National Center for Homeopathy, which is neither independent nor governmentally affiliated, and to three studies reporting results better than placebo. Indeed, some studies do report statistically significant results (p < 0.05) in favor of homeopathy. The trouble is that, if you run enough studies, then simply through chance you’ll get some apparently positive results (an idea beautifully illustrated by Randall Munroe). That’s why we need to rely on systematic analyses like the Cochrane Collaboration’s, rather than a cherry-picked corner of the evidence.
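This is the multiple-comparisons problem, and the arithmetic is easy to check. A minimal sketch, assuming the conventional significance threshold of 0.05 and independent trials of a treatment that has no real effect:

```python
# With a significance threshold of alpha = 0.05, a trial of a
# completely ineffective treatment still comes out "significant"
# by chance with probability 0.05. Run enough trials and spurious
# positive results become almost inevitable.

alpha = 0.05  # conventional significance threshold

for n_studies in (1, 10, 20, 50):
    p_at_least_one = 1 - (1 - alpha) ** n_studies
    expected = alpha * n_studies
    print(f"{n_studies:>3} studies: "
          f"P(at least one false positive) = {p_at_least_one:.2f}, "
          f"expected false positives = {expected:.2f}")
```

With just 20 studies of a useless treatment, the chance of at least one spuriously “significant” result is about 64% — which is why a handful of positive studies, selected after the fact, proves very little.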

In its reply, HWB said “there is no better proof of its effectiveness than to see it ‘on the ground.’” But anecdotal “on the ground” evidence is problematic precisely because you don’t know whether the people treated got better because of the treatment, because of the placebo effect, or because they would have gotten better anyway, as many people do. Again, I don’t have much of a problem with people dishing out placebos if doing so makes people’s lives better. But when those people might then not seek other treatment that is efficacious beyond placebo, that’s a serious problem.

Still, there’s some good to be gleaned from HWB. It shows that the way we think about charity in general is all wrong, and that HWB’s tax status is simply the most extreme end of a wider problem. If you’ve got a standard for assessing charity effectiveness under which HWB could carry out its homeopathy program and still be a top-rated charity, then your metric is a non-starter. Unfortunately, this applies to almost all professional charity evaluators. Let’s look at some common standards:

1. Overhead Ratio

This one is pretty famous: the idea is that you look at how much a charity spends on overhead (such as fundraising and administration) and how much the charity spends on “program costs”—the actual activity that the charity is aiming to implement. According to this metric, the lower the ratio of overheads to program costs, the better the charity. The American Institute of Philanthropy relies pretty heavily on this metric; Charity Navigator, which evaluates and rates philanthropic endeavors, uses it as one input among others.

But let’s suppose that HWB had almost no overhead. All the employees are so enthusiastic about their mission that they volunteer, and there are enough supporters that no money need be spent on fundraising—so overhead is tiny. Does this make the charity good? No. As long as it’s still distributing pseudo-medicine, it isn’t a good charity, because it provides no benefit to anyone. In fact, if HWB spent more on “overhead” such as research, maybe it would discover the scientific evidence against its mission and go do something else instead.
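To make the point concrete, here is a minimal sketch of the metric itself, with entirely invented figures. The ratio rewards Charity A even if its program does nothing:

```python
# Hypothetical figures only: the overhead ratio rewards Charity A,
# even though its program (distributing homeopathic remedies, say)
# produces zero benefit.

def overhead_ratio(overhead, program_costs):
    """Fraction of total spending that goes to overhead."""
    return overhead / (overhead + program_costs)

# Charity A: tiny overhead, useless program.
# Charity B: heavy overhead, genuinely effective program.
a = overhead_ratio(overhead=10_000, program_costs=990_000)
b = overhead_ratio(overhead=400_000, program_costs=600_000)

print(f"Charity A overhead ratio: {a:.0%}")  # 1% — the metric's "winner"
print(f"Charity B overhead ratio: {b:.0%}")  # 40% — yet it may help far more people
```

The number measures how money is split, not whether the program bought with it does anyone any good.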

2. Salary of Highest Paid Employee

This metric, thankfully, has generally been dropped by professional charity evaluation organizations, but it’s common in the public assessment of charities. The idea is that the more a charity spends on salaries, the more “wasteful” it is, and so the less inclined you should be to give to it.

But now suppose that we find out that all employees of HWB work for the minimum wage or for free. Does that make HWB any better as a charity? Actually, for me that makes it sound a little scarier! Not only is it doing something ineffective—but with a missionary-like zeal!

3. Accountability and Transparency

As well as financial health, Charity Navigator also looks at accountability and transparency: for example, whether there’s a whistleblower policy, whether board minutes are kept, and whether the charity’s IRS Form 990 is linked on its website. Again, HWB, if it wanted to, could conform to all of this while still distributing homeopathic remedies. So, by Charity Navigator’s lights, if it tried hard enough, it could be a top-rated charity.

4. How effectively does the charity achieve its aims?

Some charity evaluators are trying to move beyond financial health and instead look at “impact.” This seems promising, but, again, doing it misguidedly can fail the HWB test. For example, Charting Impact asks five questions in order to work out how well a charity lives up to its aims. But suppose that HWB had an aim to provide homeopathic treatment to 1 million people and it managed to do this on a tiny budget—say, $500,000. That would do nothing to benefit the global poor, but, by Charting Impact’s lights, it would be an amazing success! If the goals of the charity aren’t useful, then achieving those goals more effectively does nothing.

In general, charity evaluators too often focus on process rather than outcome. But it’s really only the outcome that we should think about.

Let’s compare this with market consumption—an area where humans tend to be much more rational. Suppose you’re planning to buy an iPhone rather than a Nexus. Does it affect your decision if the CEO of Apple has a higher salary than the CEO of Google? Or that Apple spends more on admin and marketing than Google does? Why should you care? All that matters is the quality of the product and its cost. In just the same way, all that matters when donating to charity is how good an outcome it will produce with your donation, at least as long as it isn’t actively harming anyone along the way. Show me a charity that spent 90% of its money on lavish parties for its CEO and on bureaucratic admin, and had never filed a financial statement in its life, but spent the final 10% distributing long-lasting insecticide-treated bed nets in sub-Saharan Africa, and I would donate to it over HWB without a blink.
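The comparison above can be put in one line of arithmetic: what a donor should care about is outcomes bought per dollar donated. A minimal sketch, with invented numbers (the $5-per-net figure is purely hypothetical):

```python
# Invented numbers for illustration: outcomes per dollar donated
# are what matter, not the split between overhead and program.

def nets_per_dollar(fraction_to_program, cost_per_net):
    """Bed nets delivered for each dollar you donate."""
    return fraction_to_program / cost_per_net

# The party-loving charity: 90% wasted, but the remaining 10%
# buys bed nets at a hypothetical $5 per net delivered.
party_charity = nets_per_dollar(fraction_to_program=0.10, cost_per_net=5.0)

# The homeopathy charity: 100% goes to the program, but the
# program delivers nothing of value.
homeopathy_charity = 0.0

print(f"Party charity:      {party_charity:.3f} nets per dollar")
print(f"Homeopathy charity: {homeopathy_charity:.3f} nets per dollar")
```

Any positive outcome per dollar beats zero, however embarrassing the rest of the balance sheet looks.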

What’s true is that some focus on process can be relevant in combination with a focus on outcomes. For example, you can only really know about a program’s cost-effectiveness if the charity is transparent (hence GiveWell’s emphasis on transparency), and it’s unlikely that a charity can reliably do good if it has no control over its finances. So, all other things being equal, I’d prefer to give to the Against Malaria Foundation (which, as well as running an extremely cost-effective program, spends a tiny £80,000 annually on overhead, has an executive director who works for free, and manages to get a remarkable number of services pro bono) rather than to the malaria charity with great parties described above. But even then, these other metrics are useful only insofar as they help the charity produce better outcomes, or help you know that the charity is producing better outcomes.

There are some emerging charity evaluators that are genuinely outcome-focused. GiveWell and Giving What We Can (of which I’m a cofounder) explicitly focus on the good produced per dollar donated. And Charity Navigator is at least aiming in the right direction with its new plan to include “results reporting”—though, given its history, I still have my doubts.

We may not always be able to quantify or compare the outcomes of different charities across different causes. But let’s stop pretending that outcomes don’t matter.