Each time we conduct our evaluation process, we gain new insights that allow us to improve our research. We feel it’s important to share these insights with our audience in order to solicit feedback, explain potential changes in future evaluations or our general research strategy, and increase the transparency of our decision process. In this post, we present some of the things that we found surprising after completing our December 2016 evaluation process. Although some things surprised more than one of us, there was enough variation that each staff member active in the review process has included what surprised them individually.

Allison Smith, Director of Research

More Charities Agreeing to Publication of Exploratory Reviews

Last year, Jon was surprised by the small number of charities that allowed us to publish shallow reviews. This year we changed our process slightly for that review level, adding a call with a representative of the charity as part of our research process, and renaming it the “exploratory” level of review. I thought that adding the call would probably help charities feel more positive about our reviews and gain a better understanding of our process, but I didn’t anticipate that it would make a substantial difference in how many reviews we would be able to publish.

Compared to last year, this year we had a major decrease in the number of exploratory reviews that we wrote but were unable to publish: down from 10 to 1. While I hope that part of this decrease is due to ACE becoming seen as a more important and reliable source of information, I also think adding the calls to our exploratory research process was partially responsible for the change. Some charities opted out of the review when we tried to set up a call with them, reducing the number of reviews we wrote; this is an opportunity they wouldn’t have had under the process we used in 2015. Other charities engaged with us in fairly lengthy processes of editing and discussing their reviews, which ultimately did lead to publication, something that I think occurred less often in previous years. This outcome may have been partly due to the time we spent talking with them before writing the reviews.

Calculating Cost-Effectiveness with Uncertainty Was Easier Than Expected

This year we made two major changes to our cost-effectiveness estimates: providing most figures as ranges to indicate uncertainty, and adding estimates for the number of years of suffering spared (alongside the number of animals spared). We’d talked about implementing both of those changes in the past, and I viewed incorporating uncertainty as both the more difficult and more important change. It was definitely the change that made the most difference in how we did the computations this year; while we added a few cells to each computation to calculate the number of years of suffering spared, we completely changed the platform we were using for computations to better integrate uncertainty into our calculations (from standard spreadsheets to Guesstimate).

Once we decided to use Guesstimate to handle our calculations, I expected most of the additional difficulty in incorporating uncertainty to come from having to produce not just an accurate estimate of each quantity, but also appropriate lower and upper confidence bounds. (This isn’t what would have been hardest if we’d incorporated uncertainty into all our computations last year; at that time, finding a cost-effective tool, or correctly handling uncertainty across varied computations without a special tool, would have been a significant barrier. Guesstimate wasn’t fully featured enough to handle our needs until late September this year, and there are still features we wish it had, such as the ability to produce estimates that don’t change when the page is reloaded.) In practice, producing bounds wasn’t as hard as I expected, because for uncertain quantities it can be easier to identify a range within which you expect the true value to lie (a subjective confidence interval) than to pick a single most likely value.
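Guesstimate propagates these subjective confidence intervals through a model by Monte Carlo sampling. The sketch below shows the general idea with purely hypothetical numbers (the spending, animals-spared, and years-per-animal ranges are illustrative inputs, not ACE’s actual figures), using uniform draws for simplicity where Guesstimate fits smoother distributions:

```python
import random

def sample_interval(low, high):
    """Draw one value from a subjective confidence interval.

    A uniform draw keeps the sketch simple; Guesstimate typically fits
    a lognormal or normal distribution to the stated 90% interval.
    """
    return random.uniform(low, high)

def simulate(n=100_000, seed=0):
    """Monte Carlo estimate of years of suffering spared per dollar."""
    random.seed(seed)
    results = []
    for _ in range(n):
        spending = sample_interval(400_000, 600_000)    # dollars (hypothetical)
        animals = sample_interval(1_000_000, 5_000_000) # animals spared (hypothetical)
        years_per_animal = sample_interval(0.5, 1.5)    # years of suffering per animal
        results.append(animals * years_per_animal / spending)
    results.sort()
    # Report a 90% interval on the output rather than a point estimate
    return results[int(0.05 * n)], results[int(0.95 * n)]

low, high = simulate()
print(f"Years of suffering spared per dollar: {low:.1f} to {high:.1f}")
```

Because each input is a distribution rather than a point, the output interval automatically reflects the combined uncertainty, which is what made it natural to include the more speculative factors mentioned below.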

Having cost-effectiveness information provided as ranges made it easier to communicate uncertainty within the group of people involved in our recommendation process, as well as to our audience. When people were surprised by the size of a particular range, or really disagreed with a range, it often led to useful qualitative discussions about the factors that had produced the range. Using ranges also allowed us to include some slightly more speculative factors we might have been too worried to include if we’d had to provide cost-effectiveness as point estimates.

Jacy Reese, Researcher

More Room for Improvement in ACE Process Than Expected

I left the ACE Board of Directors and joined as a full-time Researcher midway through last year’s review process, so I didn’t have as full a perspective on the evaluation process then as I did this year. This year, I saw more room for improvement than I expected to find, from small things like the consistency of our wording to big-picture decisions like how many months to spend on the review process. Many of these opportunities are hard to identify from outside the ACE research team, something I didn’t appreciate last year.

While some of those improvements—such as wording—can be done quickly, others are more difficult to implement. For example, I wish we had more developed views on what makes an organization highly effective from an operations perspective (e.g. what impact an extensive strategic plan has on effectiveness), but this would require months of research. The feedback on our reviews from the board this year also highlighted time-intensive improvements, asking us for more information or analysis on specific topics. This also relates to my general view that we need to allocate more time towards foundational research on effective animal advocacy topics relative to the time we spend on charity and intervention evaluations.

Some Charities Were More Okay With Critical Content

I can’t really elaborate here due to confidentiality, but I expected that charities would usually strongly oppose the publication of critical or negative content about their work. This year, however, some charities were more comfortable with such content than I expected. Perhaps I underestimated how much they genuinely value contributing to ACE’s process of writing and publishing honest reviews, or perhaps the charities have a lower bar than I do for how positive a piece has to be in order to count as good press.

Increased Weight on High-Quality Quantitative Cost-Effectiveness Estimates

Our cost-effectiveness estimates improved a lot this year, with both the Guesstimate and social media calculators. I also saw some interesting and compelling quantitative estimates outside of ACE, like one comparing the work of GiveWell-recommended charities to ACE’s, and another comparing several EA interventions from a far future perspective. On the qualitative front, it has been difficult to improve our evaluation methods; I sometimes feel that the qualitative process can’t get significantly better than “list out evidence, think about it for a bit, then pop out an answer.” Other ways of improving it just turn the process into quantification, such as giving a 1–10 rating on each criterion for each charity. While I remain more optimistic about qualitative strategies than many of my EA colleagues, I’m a little less so after this review season. Note that I think there are still significant issues with the communication of quantitative approaches; my update is just in terms of their usefulness for accurately forming beliefs about advocacy effectiveness.

Toni Adleberg, Research Associate

I was involved in ACE’s charity evaluation process for the first time this year, and I didn’t begin with many strong expectations about how it would go. Still, there were several things I found surprising.

The Importance of Criterion One: Room for More Funding

After we evaluated each charity using our seven criteria, we began considering which charities to recommend. For me, a few of those decisions came down to one factor: room for more funding. You may have noticed that our Top Charities have significantly greater room for more funding than many of our Standout Charities and the other charities we’ve considered.

In retrospect, the importance of room for more funding probably shouldn’t have been a surprise. We moved a significant amount of money this year to our Top Charities, and we expect that amount to continue increasing over the coming years. We recommend a small number of charities, and we want to be sure that they can absorb all ACE-directed funding effectively.

The Number of Charities that are Focused on Effectiveness

Before joining ACE, I didn’t realize how many animal charities routinely consider cost-effectiveness when making decisions. I knew that most ACE Top and Standout Charities do highly cost-effective work, but I didn’t expect so many charities to raise the issue of cost-effectiveness in our conversations with them. Occasionally, I wondered whether a charity had seen our evaluation criteria and was trying to sell itself to us. However, many charities told us about their specific strategies for achieving the largest impact possible with their resources, and it was clear in those cases that they had already thought a lot about the cost-effectiveness of their work.

Of course, the charities we evaluated were selected in part based on whether their work seemed cost-effective, so the proportion of groups that consider cost-effectiveness is likely much higher among the charities we evaluated than it is among all animal charities. Still, I was pleasantly surprised at the sheer number of charities that told us their bottom line is helping as many animals as possible.