Introduction

When our research team is not evaluating charities, we are often thinking about how to better evaluate charities. Every year, we try to improve our process. In 2016, we revised some of our evaluation criteria and introduced “exploratory” and “comprehensive” reviews. In 2017, we deepened our investigations of workplace culture and began offering charities small grants for participating in our evaluations.

This year, we are making a number of changes, including:

Increased consultation with specialists

An updated methodology for estimating room for more funding

Organizational culture surveys

Discussions of integrity and long-term impact

A new charity evaluation handbook

Logistical changes for increasing efficiency

Here, we explain what these changes entail and how they might affect our process and recommendations. As always, feel free to contact us with questions or feedback.

Increased Consultation with Specialists

Effective animal advocacy research is a young field, and—to our knowledge—there are no other groups doing precisely what we do. However, that doesn’t mean that we research and develop our evaluation criteria entirely on our own. We draw on existing research and seek input from specialists in related fields whenever possible.

This year, we are more actively and systematically seeking input from external specialists on each of our charity evaluation criteria. We are grateful to the following individuals for agreeing to share their expertise with us:

Criterion 1: The charity has room for more funding and concrete plans for growth.
Consultant: Sagar Shah, Economist at the Bank of England

Criterion 2: The charity engages in programs that seem likely to be highly impactful.
Consultant: Zach Groff, PhD Student in Economics at Stanford University

Criterion 3: The charity operates cost-effectively, according to our best estimates.
Consultant: John Halstead, Researcher at Founders Pledge

Criterion 4: The charity possesses a strong track record of success.
Consultant: Ashwin Acharya, Former Research Associate at Animal Charity Evaluators, currently conducting research with the Foundational Research Institute

Criterion 5: The charity identifies areas of success and failure and responds appropriately.
Consultant: Steven Mayer, Professor in Program Development and Evaluation at Johns Hopkins University, Krieger School of Arts and Sciences

Criterion 6: The charity has strong leadership and a well-developed strategic vision.
Consultant: Nancy Aries, Professor in Nonprofit Management at Baruch College, Marxe School of Public and International Affairs

Criterion 7: The charity has a healthy culture and a sustainable structure.
Consultant: Audrey Lawson-Sanchez, Executive Director at Balanced

We plan to seek our consultants’ input twice during our evaluation process: once before drafting our reviews and once after. By soliciting a variety of viewpoints, we hope to reduce the chances that we will miss important considerations. Note that our consultants will not be responsible for selecting our recommended charities. We welcome their input, but all final decisions will be made by our evaluations team and Executive Director.

Updated Methodology for Estimating Room for Funding

Last year, we started using Guesstimate models to better estimate each charity’s room for more funding (RFMF). This allowed us to make more detailed estimates than in previous years; however, it also highlighted a limitation in our then-current thinking about RFMF. We had previously treated all of a charity’s funding gaps as equal in priority, which is unlikely to be the case. For example, if one charity needed $500,000 to hold in reserve for a legal retainer, and another charity needed $500,000 to hire new staff for its corporate outreach program, we believe the latter would be a much higher funding priority. However, our previous models could not make this distinction. Since several of our charities had similar amounts of total RFMF in 2017, we felt there was room for our models to provide more nuanced information so as to further distinguish between them.

We have thus implemented a system that divides different types of spending into three priority levels: high, moderate, and low. We categorize funding into these levels primarily according to our assessments of intervention effectiveness. As part of our evaluation process, our research staff rate the effectiveness of interventions commonly used by charities. If the majority of staff rate an intervention as likely to be effective or ineffective, it is assigned to priority level 1 or 3, respectively. If the majority of staff feel there is not enough evidence of an intervention’s effectiveness, or if staff have mixed opinions, it is assigned to priority level 2.

In addition to this programmatic division, there are a few other types of funding gaps that we prioritize directly. Our soon-to-be-published research on the allocation of resources within the movement suggests that capacity-building interventions are generally underfunded, so we have tended towards including capacity-building activities in level 1. This includes senior staff hires, expansion into new countries (especially the BRIC countries), and any interventions that we believe are especially effective at building capacity. Similarly, there are some funding gaps that we do not think should be prioritized very highly, and these have been allocated to level 3. These include reserved funding—whether for general replenishment of cash reserves or for more specific uses, such as legal retainers—and our estimate of possible additional expenditure that has not been accounted for elsewhere in the model. The distribution of funding across the three priority levels will be discussed and displayed in a chart at the end of this section of each review.
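The assignment rules above can be sketched in code. This is a hypothetical illustration, not ACE's actual implementation; the function names, rating labels, and gap categories are our own assumptions for the sake of the example.

```python
# Hypothetical sketch of the priority-level assignment described above.
# Names and categories are illustrative, not ACE's actual implementation.

def intervention_level(staff_ratings):
    """Assign a priority level (1-3) from staff effectiveness ratings.

    Each rating is "effective", "ineffective", or "unclear" (covering
    both "not enough evidence" and mixed opinions).
    """
    n = len(staff_ratings)
    if staff_ratings.count("effective") > n / 2:
        return 1  # majority rate it effective -> high priority
    if staff_ratings.count("ineffective") > n / 2:
        return 3  # majority rate it ineffective -> low priority
    return 2      # insufficient evidence or mixed opinions -> moderate

def funding_gap_level(gap_type, staff_ratings=None):
    """Apply the fixed prioritizations first, then fall back to ratings."""
    if gap_type in {"capacity_building", "senior_hire", "new_country"}:
        return 1  # capacity building treated as underfunded -> level 1
    if gap_type in {"reserves", "legal_retainer", "unaccounted_expenditure"}:
        return 3  # reserves and buffers -> level 3
    return intervention_level(staff_ratings or [])

# The two $500,000 gaps from the example above:
print(funding_gap_level("legal_retainer"))  # prints 3 (low priority)
print(funding_gap_level("corporate_outreach",
                        ["effective", "effective", "unclear"]))  # prints 1
```

Under this sketch, the legal retainer lands in level 3 regardless of staff opinion, while the corporate outreach hire is prioritized by the staff's effectiveness ratings.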

Organizational Culture Surveys

Last year, we deepened our investigations of each charity’s work culture by conducting confidential interviews with non-leadership staff and/or volunteers at each charity under evaluation. We found these interviews highly valuable; they gave us insight into the culture of each charity as well as into the general factors that make for a healthy workplace. However, we know that these confidential calls, on their own, are not an ideal way to evaluate a charity’s culture. Though we made an effort to select staff at random, one or two staff members may not represent the attitudes of a charity’s entire staff. In the future, we may try to interview a larger number of staff, in proportion to each charity’s size. For now, however, we lack the staff time to conduct more than two confidential interviews per charity.

To gain a broader picture of each charity’s culture, we are adding a new component to our culture investigations this year. In addition to our confidential calls, we will ask each charity to distribute a short online culture survey to all staff. The survey is still in development, but it will include questions on the following broad topics:

General satisfaction with management and human resources

Ease of internal communication (including systems for reporting problems or providing feedback to leadership)

Diversity, equity, and inclusion

Harassment and discrimination

Once it’s complete, we plan to make the questionnaire publicly available for any charity to use. In our reviews, we will discuss overall patterns in the survey responses, but we will not share potentially identifiable individual responses.

Discussions of Integrity and Long-Term Impact

While we are not making any major changes to our evaluation criteria this year, there are a couple of points that we’ve chosen to incorporate more explicitly into our existing criteria. First, we are incorporating a discussion of long-term impact into Criterion 2 (“The charity engages in programs that seem likely to be highly impactful”). Second, we are introducing a discussion of integrity to Criterion 7 (“The charity has a healthy culture and a sustainable structure”).

Assessing each charity’s long-term impact is one of the most difficult parts of our evaluations. When we model the cost-effectiveness of a charity, we exclude especially long-term or indirect outcomes, in part because of our high uncertainty about how to quantify them. However, we do try to qualitatively evaluate the long-term impact of a charity in Criterion 2. This year, we’ve decided to do so more explicitly in a dedicated subsection of Criterion 2.

We believe that the concept of integrity is a valuable addition to our discussions of workplace culture. We will be assessing the extent to which each charity’s actions are aligned with its professed values. Of course, integrity is not perfectly correlated with effectiveness; a charity might significantly and efficiently reduce suffering yet lie to its stakeholders. Conversely, a charity might act with total integrity yet fail to accomplish anything for animals. For our purposes, however, since we are primarily evaluating organizations that pledge to promote equality and/or reduce suffering, it is worth considering—and it is of interest to donors—whether their actions are aligned with those values.

Our New Charity Evaluation Handbook

For the first time, we’ve developed a handbook for the charities participating in our evaluation process. The handbook contains a schedule of our evaluation season, a description of the process, and the details of our policies. We hope that the handbook will be a useful guide for the charities we work with. Since it’s available online, it also contributes to our transparency regarding our policies and procedures.

Logistical Changes for Increasing Efficiency

We’re always looking for ways to make our evaluation process run more smoothly and to allow our Researchers more time for drafting the reviews. This year, we’ve made a number of small process improvements that we believe will increase our efficiency. For example, we’ve identified several small tasks that we can complete early in the summer in order to free up time during our busier months. We are also seeking more help from volunteers and interns this year for some of those tasks, such as compiling relevant data for use in our cost-effectiveness models. We’ve decided to write fewer (if any) exploratory reviews so that we can focus our attention on our comprehensive reviews, and we’ve shifted many of our deadlines earlier so that we have more time for revisions.

Final Thoughts

At ACE, we continually assess our own programs and actively look for ways to improve. We discuss potential changes to our evaluation process each year during a “post-mortem” meeting that we hold soon after the publication of our reviews—while the process is fresh in our minds. We also solicit feedback from the charities we’ve evaluated to consider ways in which we can improve the process for them. We hope that the changes we’re implementing this year will result in improvements in (i) the experience of the charities we work with, (ii) our efficiency, and (iii) the quality of our reviews. If you have further suggestions for ways that we can improve our evaluation process, feel free to contact us.