As part of our research on the history of philanthropy, I (Luke Muehlhauser) investigated several case studies of early field growth, especially those in which philanthropists purposely tried to grow the size and impact of a (typically) young and small field of research or advocacy. As discussed below, my investigations had varying levels of depth.

Those interested in further reading on the cases discussed here can consult the annotated bibliography, which gives brief notes on the sources I found most helpful.



Published: April 2017, Updated: August 2017

My process and key takeaways

To find potential case studies on philanthropic field-building, I surveyed our earlier work on the history of philanthropy, skimmed through the many additional case studies collected in The Almanac of American Philanthropy, asked staff for additional suggestions, and drew upon my own knowledge of the history of some fields.

My choices about which case studies to look at more closely were based mostly on some combination of (1) the apparent similarity of the case study to our mid-2016 perception of the state of the nascent field of research addressing potential risks from advanced AI (the current focus area of ours where the relevant fields seem most nascent, and where we’re most likely to apply lessons from this investigation in the short term), and (2) the apparent availability and helpfulness of sources covering the history of the case study.

I read and/or skimmed the sources listed in the annotated bibliography below, taking notes as I went. I then wrote up my impressions (based on these notes) of how the relevant fields developed, what role (if any) philanthropy seemed to play, and anything else I found interesting. After a fairly thorough look at bioethics, I did quicker and more impressionistic investigations and write-ups on a number of other fields.

My key takeaways — both from the case studies I describe below and from some other case studies I studied briefly but do not describe below — are:

Most of the “obvious” methods for building up a young field have been tried, and those methods often work. For example, when trying to build up a young field of academic research, it often works to fund workshops, conferences, fellowships, courses, professorships, centers, requests for proposals, etc. Or when trying to build up a new advocacy community, it often works to fund student clubs, local gatherings, popular media, etc.

Fields vary hugely along several dimensions, including (1) primary sources of funding (e.g. large philanthropists, many small donors, governments, companies), (2) whether engaged philanthropists were “active” or “passive” in their funding strategy, and (3) how much the growth of the field can be attributed to endogenous factors (e.g. explicit movement-building work) vs. exogenous factors (e.g. changing geopolitical conditions).

Besides these major takeaways, I also learned many more specific things about particular fields. For example:

The rise of bioethics seems to be a case study in the transfer of authority over a domain (medical ethics) from one group (doctors) to another (bioethicists), in large part due to the first group’s relative neglect of that domain. [More]

In the cases of cryonics and molecular nanotechnology, adversarial dynamics that plausibly stunted growth arose between advocates of these young fields and scientists in adjacent fields (cryobiology and chemistry, respectively). These adversarial dynamics seem to have arisen, in part, from the young fields’ early focus on popular outreach prior to doing much scientific or technical work, and from their disparagement of those in adjacent fields. [More]

The rise of neoliberalism is a victory for an explicit strategy of decades-long investment in the academic development and intellectual spreading of a particular set of ideas, though this model may not work as well when the ideas themselves don’t happen to benefit a naturally well-resourced set of funders (large corporations and their wealthy owners, as in the case of neoliberalism). [More]

A small group of funders of the conservative legal movement managed to critique their own (joint) strategy, change course, and succeed as a result. [More]

The rise of the environmental and animal advocacy movements contrasts sharply with the cases above, both because these movements grew mostly via a large network of small funders rather than a small network of large funders, and because many of their activities do not materially benefit any funder or political actor (e.g. in the case of wilderness preservation or campaigns against factory farming). [More]

Bioethics

Annotated bibliography for this section

How bioethics began

The birth of the field of bioethics, as I understand it, provides an interesting case study in the transfer of authority over a domain (medical ethics) from one group (doctors) to another (bioethicists), in large part due to the first group’s relative neglect of that domain. After consulting a few of the available histories (see below), my impression is that the field developed roughly like this:

For many decades, doctors were the presumed authorities on medical ethics, and their approach was fairly pragmatic and utilitarian, i.e. focused on competently and professionally doing what is best for the patient.

Starting in the 1960s, new medical capabilities (e.g. heart transplants) and some medical ethics scandals (e.g. the Tuskegee syphilis experiment) seemed to demand ethical analysis, but for the most part, the professional medical community didn’t want to spend its time on such “distractions” from the practice of medicine.

A mix of scholars, often theologians or philosophers, began to fill this void by devoting themselves full-time to studying and writing about questions of medical ethics. These people began to call themselves “bioethicists.”

Then, when some key government commissions and court cases came about in the 70s and 80s, the bioethicists had done enough work to establish themselves as “the experts” on these topics, and thus had a large and lasting influence on some important early laws and court decisions concerning various issues in medical ethics. Since the medical community had also neglected to develop curricular materials for teaching medical ethics, this void too was filled by texts written by bioethicists rather than by medical professionals, and thus whole generations of medical professionals were trained in the bioethicists’ early approach to medical ethics rather than (say) an approach developed by doctors.

These developments annoyed many medical professionals. In part, this was because they felt that professional medical expertise was necessary (and perhaps sufficient) for thinking through the ethical issues that arise in the practice of medicine. Another source of annoyance may have been that bioethicists of the time tended to be more theological and deontological (i.e. less utilitarian), and more cautious about developing and deploying new medical capabilities, compared to doctors.

The early laws and court decisions related to bioethics continue to have an outsized effect, though bioethicists today are probably more diverse than they were in the earliest years of bioethics, and (e.g.) many of them are explicitly utilitarian.

Perhaps the most compelling case for this “bioethics as a response to a vacuum of moral authority” account of the rise of bioethics can be found in Baker (2013), pp. 277-279:

As to the… most intriguing question, “Why was bioethics born?”… each historian seems to touch on a piece of the answer to the question… Yet none offers a comprehensive answer to the more basic question: Why did American medicine lose jurisdiction over “medical ethics” — a subject whose very name proclaims it part of medicine’s domain? …Before turning to this question, it is important to appreciate that none of the factors typically cited as explaining the American origins of bioethics was unique to America… Americans had no monopoly on morally disruptive innovations, and these innovations were as disruptive to medical morality and ethics in Europe and the rest of the world as they were in the United States… American medical societies adopted something akin to a self-imposed “prohibition” on medical ethics, creating the environment in which a robust alternative, bioethics, developed and was then exported worldwide. More specifically… the [AMA’s] adoption of a laissez-faire approach to medical ethics from 1903 through the 1970s was, in effect, a self-imposed prohibition against making authoritative statements on medical ethics. During this entire period, the AMA characterized its Principles of Medical Ethics as “standards by which a physician may determine the propriety of his conduct,” deeding the prerogative of interpreting professional standards to each individual physician’s personal moral sensibilities. Consequently, just as the U.S. prohibition on alcoholic beverages created a void in the marketplace that was filled by an alternative beverage industry — the colas — so, too, organized medicine’s laissez-faire abandonment of medical ethics created a void in the marketplace of ideas and a vacuum of moral authority. 
To fill this void, legislators, bureaucrats, the courts, and American society generally sought ideas and invested moral authority elsewhere, ultimately finding it in an oddball collection of lumpen intelligentsia [definition: “A section of the intelligentsia regarded as making no useful contribution to society, or as lacking taste, culture, etc.”] who were soon valorized as ethics experts or “bioethicists.” In Europe, by contrast, organized medicine neither abandoned medical ethics nor abdicated moral authority. Consequently, just as alcoholic and caffeinated beverages retained jurisdiction over social life in European pubs and cafes, rendering soft drinks to the status of second-class beverages, so, too, organized medical and scientific societies (e.g., the British and Dutch medical societies and specialty colleges) retained jurisdiction over medical ethics — relegating aspiring European bioethicists to the status of second-tier authorities. Thus, the Royal Dutch Medical Association… was able to negotiate physician-initiated euthanasia practices with Dutch legal authorities without involving “bioethicists” in any major decision. Similarly, the British National Health Service… was also able to initiate a covert rationing scheme limiting use of dialysis and other expensive technologies to younger patients — effectively resolving the rationing problem created by the Scribner shunt by denying access to the elderly — without annoying discussions or protests from “bioethicists.” Having retained jurisdiction and moral authority over medical ethics, organized medicine in Europe had the prerogative of negotiating with governments to determine the appropriate nature of end-of-life care (euthanasia) or the allocation of scarce resources (age rationing). In America, by contrast, laissez-faire ethics rendered medicine unwilling to express authoritative moral positions and thus unable to negotiate them with the U.S. government. 
Thus, these issues were negotiated with “outsiders” invited into the once exclusively medical jurisdiction of “medical” ethics; that is, they were negotiated with “bioethicists.” …to deal with American medicine’s abdication from moral authority, American bureaucrats joined with government and private foundations to empower a hodgepodge of ex-theologians, lawyers, philosophers, social scientists, and humanistic nurses, physicians, and researchers to address issues raised by research ethics scandals and by morally disruptive technologies… This chapter reassembles materials that have, for the most part, been cited in standard histories of bioethics to support a vacuum-of-moral-authority explanation of why Americans invented bioethics. The account places emphasis on the dearth of “ethicists” at the early stages of “ethics regulation”; the ineffectiveness of pre-bioethical self-regulatory efforts; the role of the AMA’s opposition to Medicare, Medicaid, and the racial integration of medicine as a “distraction”; and the extent to which the AMA’s laissez-faire ethics constrained the AMA from responding to moral issues, including the AIDS epidemic.

Baker’s account (pp. 303-305) of bioethicists’ takeover of medical ethics education is similarly compelling:

As late as 1983, “A National Survey of Hospital Ethics Committees,” involving more than 400 hospitals with over 200 beds, found that only 4.3 percent had hospital ethics committees (HECs) and that most of these committees had been formed around 1977, the year after the [Karen] Quinlan [court] decision and the publication of a seminal article about an ethics committee at the Massachusetts General Hospital… In the absence of leadership from the AMA, which had opposed the very idea of ethics committees, only a few hospitals, mostly in large academic medical centers, had explored the use of ethics committees. Yet the empirical data indicated that committees were effective in ameliorating the moral distress caused by chaotic laissez-faire decision-making procedures. So the President’s Commission encouraged other hospitals to establish HECs [hospital ethics committees] by publishing in its appendix “A Model Bill to Establish Hospital Ethics Committees… Subsequently, just two years later, in 1985, the number of hospitals with HECs had climbed to 60 percent. In 1988, the Joint Commission on the Accreditation of Health Care Organizations… introduced a standard requiring the hospitals and healthcare institutions that it accredited to have the equivalent of HECs. Today, virtually all American hospitals and healthcare institutions have HECs or their equivalent. As Jonsen observes, “ethics committees were set an odd task,” for, unlike admissions committees or pathology committees, “they had no well defined task to perform; they were ordered to think about ethics, probably the vaguest and most controversial topics,” without a “touchstone beyond, perhaps, the skimpy code of the AMA.” Compounding the problem, organized medicine’s de-emphasis of medical ethics was reflected in the American medical school curriculum. Thus, in 1972, a survey of 102 American medical schools found that none of the 94 schools responding required medical students to take a course in medical ethics. 
To reiterate, in 1972, no American medical school offered a required course on medical ethics. [emphasis added] Fifteen medical schools openly admitted to offering no medical ethics instruction whatsoever; fifty-six responded that they touched on the subject in courses in related areas — social medicine, legal medicine, psychiatry — about one-third… gave students the option of taking an elective course on medical ethics. Thus, in 1972, no American medical school thought medical ethics important enough to be taught to all future physicians. A decade later, in 1984 — after the advent of bioethics — 84 percent of medical schools required students to take a course in medical ethics or bioethics during their first two years of instruction. In 1998, the American Association of Medical Colleges… adopted as a learning goal for all accredited medical schools “knowledge of the theories and principles that govern ethical decision-making and of the major ethical dilemmas in medicine…” A survey a decade after that, in 2008, reported that “in compliance with the [AAMC learning objectives] all 59 medical schools in the dataset required coursework in bioethics”… The use of the term “bioethics” rather than “medical ethics” in the 2008 survey is revealing; so, too, is AAMC’s reference to “principles” and genetics in its 1998 statement of learning goals. During the era when medical schools [did not require] instruction in medical ethics, no market for medical ethics textbooks existed, and so none was published. 
Thus, when American medical colleges began to require instruction in the subject, instructors found Beauchamp and Childress’s Principles of Biomedical Ethics available to fill the void, and, thus, bioethical discourse and principles naturally came to occupy the space in the medical school curriculum previously taught under the rubric “medical ethics.” In consequence, the founding generation of ethics committee members and successive generations of medical students learned to talk and think in terms of bioethical principles — autonomy, justice, nonmaleficence, and beneficence — rather than in terms of the AMA’s Principles or other traditional discourses of medical ethics.

Of course, my summary here is a gross oversimplification of a complicated historical development. Moreover, it might be substantially wrong: certainly, not every bioethicist is likely to agree with my summary, and my impressions come only from reading or skimming a few of the major published histories of bioethics and drawing my own tentative conclusions.

The role of philanthropy

The birth of bioethics was substantially funded by philanthropists, among other sources (notably, the National Endowment for the Humanities). For example:

The Rockefeller Foundation provided substantial initial funding for the Hastings Center, the first major institute focused on bioethics. As the Hastings Center grew during the 1970s, it continued to be substantially funded by philanthropists. Among other early activities, the Hastings Center hired some staff researchers, organized workshops, created a visiting scholars program, and created The Hastings Center Report, which soon became the leading journal in the field.

The Kennedy Foundation funded the 1971 creation of the Kennedy Institute of Ethics, the second major bioethics institute. Within 3 years, the Kennedy Institute grew to 20 full-time scholars and 55 graduate students. Among other things, the Institute built a large research library on bioethics, organized lectures and symposia and classes, produced TV programs on bioethics, edited an encyclopedia and bibliography of bioethics, and more. Early hires at the Kennedy Institute authored the Belmont Report and the Principles of Biomedical Ethics, two of the most influential documents in the history of the field.

The Kennedy Foundation also funded, for example, a 1972 medical ethics program at Harvard’s School of Public Health, and (in 1973) “the first full-fledged medical school program in medical ethics” at the University of Wisconsin.

The Russell Sage Foundation provided substantial initial funding for the Society for Health and Human Values, another early bioethics institute (albeit less influential than the Hastings Center and the Kennedy Institute).

In general, it seems to me that philanthropists and other early funders of bioethics did most of “the obvious things” one might do to build up a new field, and they “got lucky” in how revolutionary an impact those early efforts turned out to have (see previous section). As far as I can tell, for $10-$20M in funding from the late 1960s through the 1970s, these funders likely played a major role in completely revolutionizing the field of medical ethics. (Whether this revolution was a positive or negative development overall has been debated, and there is room to debate how important philanthropy was to it, but the revolution’s impact is not debatable.)

What was the counterfactual difference made by philanthropy in this case? The sense I got from my readings is that the field would have grown much more slowly if not for the early philanthropic funders, but it’s hard to know for sure.

To what degree were bioethics’ early philanthropic funders “passive” vs. “active” (see this blog post)? Unfortunately, I didn’t learn much about this from my readings, but one could learn about this by talking to the early funders and grantees, as many of them are still alive.

Case studies I investigated less thoroughly

I looked at several other case studies of field building, but I investigated them less thoroughly than I investigated the case of bioethics. Below, I summarize my impressions about these other case studies, based on the sources listed in the annotated bibliography. To save time, I (mostly) do not cite or quote the specific sources for my impressions (like I did in the above section on bioethics). Compared to my impressions about bioethics, my impressions about the case studies covered below are even more likely to be mistaken.

Failure modes in cryonics and molecular nanotechnology

Annotated bibliographies for this section: cryonics, nanotechnology

Two fields I studied — cryonics and molecular nanotechnology — saw especially slow, anemic field growth. Since they also exhibited some of the same apparent “failure modes,” I’ll discuss them together.

Cryonics refers to the “low-temperature preservation… of people who cannot be sustained by contemporary medicine, with the hope that resuscitation and restoration to full health may be possible in the far future” via distant medical advances. Since the practice began in the 1960s, only about 250 people have been cryopreserved by companies such as Alcor. To this day, cryonics is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention.

Molecular nanotechnology (MNT) is a proposed technology involving very small, mobile “assemblers” which can bond atoms to each other with great precision, and thereby “build almost anything that the laws of nature allow to exist.” Despite steady advocacy by small numbers of people since the mid-1980s, MNT still has not attracted significant funding or scientific attention, and when President George W. Bush signed the National Nanotechnology Initiative into law in 2003, the creator of the field (Eric Drexler) was largely excluded, and his ideas about MNT were sidelined in favor of more feasible work in chemistry and materials science, which constitutes most of what is called “nanotechnology” today.

Why have these fields failed to grow much over the course of several decades? Perhaps their slow growth is entirely explained by the questionable feasibility of their foundational claims (about resuscitation and molecular assemblers). However, when learning about the history of cryonics and MNT (for sources, see below), I encountered several additional potentially growth-stunting features they had in common, which may have also had a negative effect on each field’s prospects for early growth. I briefly make the case for this possibility below, but I am very uncertain about whether the factors I describe actually had much (counterfactual) effect on the growth of either field.

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work. These advocates successfully garnered substantial media attention, and this seems to have irritated the most relevant established scientific communities (cryobiology and chemistry, respectively), both because many of the established scientists felt that something with no compelling scientific backing was getting more attention than their own “real” work, and because some of them (inaccurately) suspected that media attention for cryonics and MNT had translated into substantial (but unwarranted) funding for both fields.

Second, early advocates of cryonics and MNT spoke and wrote in a way that was critical and dismissive toward the most relevant mainstream scientific fields, and this contributed further to tensions between advocates of cryonics and MNT and the established scientific communities from which they could have most naturally recruited scientific talent and research funding.

Third, and perhaps largely as a result of these first two issues, these “neighboring” established scientific communities (of cryobiologists and chemists) engaged in substantial “boundary work” to keep advocates of cryonics and MNT excluded. For example, in the case of cryonics: according to an historical account by a cryonicist (who may of course be biased), established cryobiologists organized to repeatedly label cryonicists as frauds until cryonicists threatened a lawsuit; they passed over cryonics-associated cryobiologists for promotions within the professional societies, or asked them to resign from those societies, and they also blocked cryonicists from obtaining new society memberships via amendments to the bylaws; they threatened to boycott the only supplier of storage vessels suitable for cryonics, forcing cryonicists to build their own storage vessels; they wrote a letter to the California Board of Funeral Directors and Embalmers urging them to investigate cryonicists and shut them down; and so on.

Throughout all this, the fields of cryonics and MNT were kept afloat (largely by philanthropists), and they’ve had some scientific and advocacy successes, but in general they grew more slowly than many other fields.

Again, I stress that these are merely my rough impressions upon consulting a variety of sources, and even if my impressions are right, it’s not clear whether these “failure modes” had much counterfactual impact on either field.

Neoliberalism

Annotated bibliography for this section

Neoliberalism refers to an intellectual movement which, starting in the 1930s, sought to revive and update a variety of ideas related to 19th century economic liberalism in the face of competition from rival approaches that placed greater emphasis on central planning, such as communism or Franklin D. Roosevelt’s New Deal. Neoliberals typically advocate free trade, deregulation, privatization, monetarism, individual liberty, and limited government, and the movement is associated with the writings and policies of Friedrich Hayek, Milton Friedman, Margaret Thatcher, Ronald Reagan, and many others.

Simplifying greatly, one might say that neoliberalism began as a small gathering of “underdog” intellectuals in 1938, but by 1980 had become the dominant economic worldview in the U.S. and the U.K., along with several other countries. It remains a leading economic ideology today, especially in the United States. For good or ill (I take no position on that here), neoliberalism is one of the most successful intellectual movements of the 20th century.

How did neoliberalism succeed? One common story goes something like this:

In 1938, 26 scholars and thinkers — several of them future early leaders in neoliberalism — gathered to discuss a recent book by Walter Lippmann. They discussed how to develop a “third way” between 19th century laissez-faire liberalism and socialism (both of which the participants saw as problematic), and Alexander Rüstow coined the term “neoliberalism” for this project. They started an organization to promote their ideas. Its work was interrupted by World War II, but nevertheless it inspired Hayek to create the Mont Pèlerin Society in 1947.

Hayek published The Road to Serfdom in 1944, which focused on the moral case for an updated form of laissez-faire economic liberalism. In 1945, an abridged version was published in Reader’s Digest, and reached a much wider audience.

Around this time, Hayek developed and argued for a particular strategy for disseminating neoliberal ideas. His strategy focused on the long-term cultivation of neoliberal-friendly academics and “intellectuals” (think tank analysts, journalists, entrepreneurs, etc.) rather than short-term political aims, and encouraged utopian thinking about what neoliberal ideas could accomplish. Through the later Mont Pèlerin Society and other venues, Hayek convinced many neoliberal thinkers and funders to pursue this strategy.

Though the period from the mid-1940s through the 1960s was arguably the zenith of New Deal progressivism, British social democracy, and neo-Keynesianism, the neoliberal movement cohered during this time and developed its ideas, so that it was poised to have an impact if the right opportunities came along.

In the 1970s, stagflation struck and the Bretton Woods monetary system collapsed, events which many interpreted as symptoms of a failure of Keynesianism. By this time, neoliberalism was a ready alternative, and it had gained many adherents among academics, intellectuals, and policy-makers. A rapid series of neoliberal victories followed: the “Chicago Boys” remade the Chilean economy according to neoliberal ideas and prompted several other Latin American countries to follow their example, Paul Volcker was made Chairman of the Fed in the U.S., Thatcher and Reagan were elected as heads of state in the U.K. and the U.S., and Hayek and Friedman won Nobel prizes.

Since then, neoliberalism has continued to be a highly influential (and perhaps “dominant”) politico-economic ideology. Though neoliberalism is often blamed for the 2008 financial crisis, no competing system was poised to take its place, so neoliberalism continues to be hugely influential.

How accurate is this story? To find out, I consulted several histories of neoliberalism, from authors both supportive and critical of the movement and its ideas (see below). From skimming these histories, my current impressions are that:

The basic story presented above seems roughly accurate, though it leaves out some important factors (see below), and it may overplay just how dominant neoliberal ideas really are (e.g. see Ben-Ami 2015).

Of course, the story above greatly oversimplifies the history; it also under-emphasizes the very large role played by sheer luck, as stressed by e.g. Jones (2012).

Most important of all, the story above under-emphasizes the role of big business. Indeed, the single biggest reason for neoliberalism’s success may be that neoliberal policy recommendations can be expected to disproportionately benefit (and flatter) big business and wealthy individuals, who of course are in an unusually good position to advance the spread of neoliberal ideas and policies, and have done so quite aggressively, especially since the 1970s, both directly and through their charitable foundations. (E.g. see Harvey 2005; Kotz 2015; Skocpol & Hertel-Fernandez 2016.)

If this last point is right, then the neoliberal playbook for growing a field might not work as well when applied to other nascent intellectual movements whose ideas don’t happen to predictably benefit a constituency as well-resourced and powerful as the one most predictably benefited by neoliberal ideas.

The conservative legal movement

Annotated bibliography for this section

Teles’ history

The definitive account of the conservative legal movement (CLM) — including law and economics — is Steven Teles’ The Rise of the Conservative Legal Movement (2008). Teles’ basic account is as follows:

The 1930s-60s saw the gradual development of the “liberal legal network” (LLN), the “collection of individuals and organizations in the legal profession, law schools, and public interest law groups that formed… the ‘support structure’ for the rights revolution… [and which thereafter] protected and extended liberal accomplishments in the law, even when the electoral coalition that had originally supported them began to wither.” Some key factors in this development were:

The New Deal created an explosion in demand for (and supply of) liberal-leaning lawyers, who soon flooded government agencies and (often after extensive government service) private law firms, law school faculties, civil rights organizations (e.g. the NAACP and the ACLU), and, a bit later, the American Bar Association (ABA).

In the 1950s, the Ford Foundation began to spend large sums on liberal-leaning legal organizations (e.g. the National Legal Aid Association) and on liberal-leaning programs (e.g. fellowships) at leading law schools.

Gideon v. Wainwright (1963) mandated that the state provide counsel for defendants who cannot afford to hire their own counsel, thus greatly expanding the provision of legal aid which, for various reasons, tends to attract (and subsidize the career development of) left-leaning lawyers.

Law students of the revolutionary 1960s demanded that their education be more relevant to the cause of social justice, contributing (e.g.) to a great expansion in legal clinics which, for a variety of reasons, have typically had caseloads that “have included little to please modern conservatives, and provided a significant source of free labor, training, and recruitment for the [left-leaning] public interest law movement.”

More generally, liberal ideology enjoyed something of a “moral monopoly” (especially e.g. in the 1960s), such that the dominant picture of a morally praiseworthy intellectual or lawyer was that of a left-leaning intellectual or lawyer engaged in left-leaning causes.
There were other factors, but I don’t survey them here; the rise of the LLN could be its own case study.

Despite the general dominance of the LLN up through the 1980s, law and economics was an early success story for the CLM. (Though many of the subfield’s practitioners lean left and/or have been motivated by a non-ideological desire to make law more empirical, the subfield nevertheless leans right.) Law and economics got its start at the University of Chicago in the 1940s and 50s, largely via some early neoliberal scholars and some others heavily influenced by them. In some cases, the connection was quite direct: e.g. one of the founders of law and economics, Aaron Director, was brought to Chicago via funding from the Volker Fund arranged by Hayek himself. Once the subfield had enough momentum to “get off the ground,” there was abundant low-hanging fruit to pick. After all, the economic analysis of law is quite a “natural” and important idea. Douglas Baird, who eventually became dean of the University of Chicago Law School, recalled that:

…people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong. And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted from a major law review was four days. [When I took a look at bankruptcy law] I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.

Much of the field’s early success owes itself to the academic entrepreneurship and fundraising persistence of a single person: Henry Manne. Manne organized, and raised funding for, (1) a highly influential program of multi-week seminars providing economics training to law professors, (2) the first law-school center for law and economics (initially at Miami, then moved to Emory and eventually George Mason University), (3) an economics training program for federal judges, (4) a fellowships program (the Olin Fellows) supporting economics students to obtain a law degree, (5) a variety of topical conferences that boosted concentrated economic analysis of particular legal topics and helped those interested in law and economics to network with each other, (6) a program of seminars training economists in the law, (7) the first law school (George Mason) whose curriculum was focused on law and economics, and more. Not all Manne’s efforts were successful, but several of them were extremely successful.
For example, 16 of the 33 Olin Fellows got positions in academia (mostly in law schools), and by 1990, 40% of federal judges had attended one of Manne’s law and economics training seminars, including Supreme Court justices Ginsburg and Thomas, plus 67 judges in the federal courts of appeals.

As a result of these and other developments, law and economics was “becoming part of the mainstream of academic law” by the 1980s, but it was not yet institutionalized beyond a few schools. Inspired by Manne’s law and economics center in Miami, the John M. Olin Foundation began in the early 80s to fund programs (and sometimes entire centers) at the country’s elite law schools. Once law and economics was established at (e.g.) Chicago, Yale, and Harvard, this created pressure for other schools to “keep up with the Joneses.” Olin-funded programs and centers soon followed at Penn, Stanford, Berkeley, Virginia, Columbia, Duke, Georgetown, and Toronto — all between 1986 and 1989. From 1985 to 1989, the Olin Foundation and other conservative foundations contributed (at least) $4.45 million (over $9 million in 2017 dollars) to build up law and economics. The Olin Foundation continued to lead philanthropic funding for law and economics for more than a decade. Teles sums up: “measured in terms of the penetration of its adherents in the legal academy, law and economics is the most successful intellectual movement in the law of the past thirty years, having rapidly moved from insurgency to hegemony.”

Other conservative efforts to counter-mobilize against this growing dominance of the LLN initially fared much worse than law and economics did. For example, the first conservative public interest law firms were organized geographically, which caused several problems. First, they specialized in geographic regions, rather than in issues or ideological principles. Second, geographic specialization naturally led to close ties with local businessmen, but those businessmen generally abandoned their declared conservative principles whenever those principles conflicted with their bottom line, for example when various forms of state activism would privilege their own businesses. Third, the locus of political power had shifted from the states to the national stage. Moreover, initial conservative counter-mobilization against the LLN was generally reactive. Thus, conservatives were in many cases able to slow down the advance of liberal causes, but they had not yet developed much of a detailed, positive, ideological agenda to push through.

Eventually, the rest of the CLM learned from its early mistakes and began to experience large successes akin to those seen in law and economics. Many of the CLM’s early mistakes (outside law and economics) were diagnosed in a report prepared in 1980 by Michael Horowitz for the Scaife Foundation, which was subsequently distributed to many conservative donors and activists. Horowitz argued, among other things, that (1) the CLM needed to improve its moral reputation and its moral narrative, so that young lawyers could see that “one can be caring, moral, intellectual… while at the same time being radically opposed to the stale views of the left,” that (2) the CLM needed to be organized by functions and issues rather than by region, that (3) most of the CLM needed to be located near Washington D.C., that (4) the CLM needed to become less dependent on businesses, which undermined its ideological coherence and its ability to seize the moral high ground, that (5) the CLM needed to compete with the LLN at the university, not at the corporation, and that (6) the CLM needed to focus on the long-term intellectual struggle rather than on short-term gains and easily measurable but ineffective activities such as amicus briefs.

The “Horowitz Report,” along with some lesser-known reports which came to similar conclusions, convinced many conservative foundations to change their strategy. They pulled back funding for the activities the Horowitz Report criticized, and by the 90s had shifted their funding to organizations implementing the strategies Horowitz recommended, for example the Federalist Society (see below) and a second generation of conservative public-interest law firms whose approaches were informed by the Horowitz Report (see below).

Alienated from their (mostly liberal) law schools, conservative legal students organized the Federalist Society in the early 80s. The Society organizes conferences, talks, debates, student groups, and other activities.
According to Teles, its functional role is fourfold:

First, it engages in recruitment of law students and practicing attorneys who can identify with and participate in the [CLM]. Second, it invests in the human capital of members through frequent debates, which acquaint them with conservative legal ideas and heighten their intellectual self-confidence, and through their participation in its student, lawyer, and practice groups, which provide leadership experience. Third, the Society produces cultural capital, in that its activities facilitate the orderly development of conservative legal ideas and their injection into the legal mainstream, reducing the stigma associated with those ideas in institutions that produce and transmit professional distinction. Fourth, and perhaps most importantly, the Society is a producer of social capital in the form of [social] networks that develop as by-products of Society activities. In the absence of an organization like the Federalist Society, these movement public goods would be produced in a haphazard, uncoordinated, and redundant fashion, if produced at all. Organizational entrepreneurs would have seen their transaction costs escalate significantly, to the point where some activities would not have been worth pursuing.

I won’t try to list the Federalist Society’s many successes and impacts here; suffice it to say that it is widely regarded — by conservatives and liberals alike — as having had a massive positive impact on the rise of the CLM. A second generation of conservative public-interest law firms, informed by the Horowitz Report (or by similar findings by others), saw greater success than the first generation.
For example, “since their founding in 1991 and 1989, respectively, both the Institute for Justice and the Center for Individual Rights have established impressive track records of placing [and sometimes winning] significant cases before the Supreme Court… Despite the millions of dollars that conservative patrons invested in first-generation firms, none of them came close to this record of winning important, precedent-setting cases.”

Lessons

If Teles’ account is roughly correct, what lessons can be learned from the rise of the CLM? Teles’ takeaways are:

The most serious mistake those seeking to learn from legal conservatives could make would be to create carbon copies of conservatives’ organizational apparatus, mimicking rather than learning. The most successful conservative projects, such as the Federalist Society, were adaptations to specific weaknesses of the conservative movement and responses to the character of liberal entrenchment… The success of the Federalist Society, however, does not mean that it can be cloned, for actors today face a very different set of challenges than conservatives did… That does not mean that there are no lessons of general applicability from the conservative organizational mobilization in the law.

The first is the need for honesty. Conservatives were willing to face, at times brutally, the ideational and organizational weaknesses of the movement. The Horowitz Report, for example, was a major turning point for conservatives because it laid bare the manifest inadequacies of the movement, criticizing almost the entirety of the conservative infrastructure in the law… The conservative experience also suggests that little significant change is likely to come from existing organizations or leaders… Change came instead from new organizations, and their predecessors only changed much later, if at all.

The history of the conservative legal movement suggests that successful political patrons engage in spread betting combined with feedback and learning, rather than expecting too much from grand planning. Conservatives’ learning and feedback did not, however, involve using narrow, technical forms of evaluation. Conservative patrons were willing to accept fairly diffuse, hard-to-measure goals with long-term payoffs when they had faith in the individuals behind the projects. This goes against the grain of much of contemporary philanthropy, which emphasizes rigorous, usually quantitative, evaluative measurement. Conservative patrons were typically quite close to the entrepreneurs they funded and depended on their own subjective evaluation of both a given entrepreneur’s effectiveness and the information that flowed through trusted movement networks — rather than on “objective” measures of outcomes. Where goals such as transforming the climate of opinion are concerned, this form of subjective evaluation may be more effective than seemingly precise measures that often leave out the most important, albeit difficult-to-measure, outcomes.

Legal conservatives did not achieve as much as they have simply by more effectively packaging or marketing their ideas. Instead, conservatives became more effective by challenging, and ultimately changing, their ideas. Decades of debate in Federalist Society conferences and within the network of conservative scholars led to jettisoning the concepts of judicial restraint and strict constructionism, and then original intent, before finally settling (at least provisionally) on “original meaning.” …The conservative legal movement took ideas very seriously, and its patrons invested significant resources in serious, first-order discussion of fundamental commitments with little if any short-term payoffs. While many contemporary liberals seem obsessed with creating their own think tanks to allow for “instant response,” conservatives recognized the need to go back to “first things.”

…Perhaps one of the most common mistakes that have been made by those who have attempted to learn from the conservative legal movement has been the tendency to confuse direct organizational goals and the desired by-products of activities with other ends. The Manne programs in the 1970s and 1980s and the lectures and conventions of the Federalist Society, for example, contributed mightily to the development of academic and professional networks. These networks spurred intellectual productivity, improved the information that conservatives could access in government, and assisted in identifying ideological sympathizers when staffing the federal judiciary and administrative agencies. As important as these outputs were, however, they were by-products, or external benefits, of activities and organizations that worked because they were not aimed directly at these goals. Professors and judges attended Manne’s seminars because they were deeply intellectually stimulating, and, despite the unquestioned presence of opportunists within its ranks, such stimulation remains the main force drawing lawyers and law students to Federalist Society meetings.

The final lesson to be drawn from the conservative legal experience …[is this]: In the short term, politics is, in fact, a world of constraints, but to agents willing to wait for effects that may not emerge for decades, the world is rich with opportunity. Activists would do well to learn from, and act upon, these examples of long-term effects.

After reading Teles’ book, I consulted a few other sources (see below). In general, they seem to agree with Teles’ basic account. Moreover, my subjective impression of Teles’ book was that it is an unusually impressive piece of historical scholarship. Indeed, of the >50 book-length histories I read or skimmed for this project, it might be the most impressive. For these reasons, I think Teles’ basic account of the history of the CLM, and what lessons can most reasonably be drawn from it, are a reasonable best guess for what an outsider like myself should think, given that I wanted to invest <10 hours studying the history of the field.

The role of philanthropy

Funding for the CLM seems to have come from a mix of philanthropic and corporate sources, with the ratio between the two changing over time, and the role of corporate funding seemingly larger than has been the case for the LLN. It has also been fairly concentrated, relative to e.g. the funding for environmentalism.

Funding for the CLM has also been a mix of “passive” and “active” funding (see this blog post). For example, much of the funding that Henry Manne raised was passive, but this sometimes led to later active funding — most notably with the Olin Foundation. The Olin Foundation learned about law and economics from Henry Manne, did some reading about it, and then became by far the field’s largest funder for multiple decades, and became very active and strategic in its approach, eventually coordinating its spending with other major philanthropists to maximize their collective impact. Certainly, the foundations influenced by the Horowitz Report pursued an active and strategic funding approach during certain periods.

American geriatrics and the John A. Hartford Foundation

Annotated bibliography for this section

By the early 1980s, geriatric medicine had developed into a medical specialty with its own journals, textbooks, and professional societies, but it was still fairly small (in the U.S.) relative to the size of the aging “baby boomer” generation. Recognizing this, the John A. Hartford Foundation decided to pull out of other areas and focus almost exclusively on building up the field of geriatrics.

The Hartford Foundation launched an ambitious program that funded geriatrics training and research among medical doctors, nurses, and social workers. This included funding for new centers, doctoral candidates and post-doctoral researchers, training for practitioners and for the trainers themselves, geriatrics research, curriculum development, awards, and the direct improvement of health care services available to the elderly. The Foundation also worked with grantees to get licensing agencies to include geriatric questions on medical licensing exams, which created additional incentive for educational institutions to improve their coverage of geriatrics.

As of 2012, the Hartford Foundation had made 560 geriatrics-related grants, worth $451 million, and it is widely regarded as one of the most important actors in growing the field of geriatrics in the United States, perhaps second only to the Veterans Administration. There is little doubt that American geriatrics would have grown much more slowly without the Hartford Foundation.

Overall, my sense of the Hartford Foundation’s geriatrics efforts — from the few sources I consulted; see below — is that they funded most of the things one would “naturally” be tempted to fund when trying to quickly scale up a field like geriatrics; that many of the things they funded worked quite well, while some had disappointing results; and that in general their strategy made sense at a high level and basically “worked,” though it’s somewhat hard to judge the “efficiency” of the money spent. In all these senses, the Hartford Foundation’s geriatrics work is a prototypical example of what I observed when studying several other cases of early field growth (that I don’t summarize here).

American environmentalism

Annotated bibliography for this section

As most histories tell the story (see below), the modern American environmental movement coalesced, in the 1960s and 70s, from a variety of old and new concerns and activities — especially, from the conservation movement and anti-pollution activism, but also from newer concerns such as fear of nuclear contamination and resource depletion.

Despite these earlier roots, it seems fair to say that environmental concern and activism “exploded” in the 60s and 70s. For example:

Opinion polls show a dramatic rise in concern for the environment in the late 1960s. E.g. in a Gallup poll asking which national problems should receive more government attention, the percentage of respondents selecting “reducing pollution of air and water” rose from 17% in 1965 to 53% in 1970, and the issue’s rank among respondents’ selections rose from 9th (of 10) to 2nd over the same period.

“In 1969… only two full-time lobbyists served the environmental movement. By 1975, the [leading] twelve organizations… employed 40 lobbyists; a decade later the number of environmental lobbyists had swelled to 88…”

Environmental concerns were barely “on the radar” in America in 1960, but by 1970 public interest had swelled such that the inaugural Earth Day (Apr. 22, 1970) involved 20 million participants (10% of the nation) at ~12,000 locally-organized events across the country — much larger than any nationwide event of the civil rights or anti-war movements of the time.

There were a few important cases of environmental legislation before 1960, but most of the key American environmental legislation was passed in the 60s and 70s: the Clean Water Act (1960), the Partial Nuclear Test Ban Treaty (1963), the Wilderness Act (1964), the Water Quality Act (1965), the Solid Waste Disposal Act (1965), the Occupational Safety and Health Act (1970), the Water Pollution Control Act (1972), the Endangered Species Act (1973), the Toxic Substances Control Act (1976), and especially the National Environmental Policy Act (NEPA, 1969), among others. The Environmental Protection Agency was established in 1970.

About half of the 30 leading national environmental organizations studied in Bosso (2005) were founded between 1967 and 1973.

Of the 5 largest environmental organizations in existence in 1950, 3 experienced explosive membership growth during the 60s and 70s (one other stayed steady, and another grew modestly). Between 1960 and 1985, the Wilderness Society grew from 10,000 to 52,000 members (5x), the National Audubon Society grew from 32,000 to 400,000 members (12.5x), and the Sierra Club grew from 16,500 to 246,000 members (15x).

What role did philanthropy play? According to Bosso (2005), “Foundations provided critical seed capital to virtually every organization created during environmentalism’s formative years in the 1960s and early 1970s, and several depended heavily on foundation support until they were compelled to diversify…” Philanthropists also funded several other landmark environmental works, for example Rachel Carson’s Silent Spring. That said, the formation and early growth of the environmental movement seems to have been more grassroots-driven than philanthropist-driven, at least relative to most of the other case studies I looked at (including those not discussed here). For example, major philanthropists seem to have assisted a negligible portion of the ~12,000 local events that comprised the first Earth Day, and the largest and most influential environmental organizations get most of their support from individuals and small-scale philanthropists rather than from major foundations. (The opposite is true for, say, neoliberalism and American geriatrics, which have been mostly funded by a relatively small number of actors.)

As a result, the success of the environmental movement seems unlikely to be explained by any particular “strategy” executed by a small number of actors (as is the case for neoliberalism, American geriatrics, and several other cases). Environmentalism mostly doesn’t seem to be a case of “one or more major philanthropists wanted to grow the field, so they funded most of the things one would ‘naturally’ fund to grow a field of this sort, and those things often but not always worked.” There was some of that, but less than in the case of (say) neoliberalism or American geriatrics.

In other words, the success of environmentalism seems to have been relatively “organic” and even accidental. This seems largely true for the sudden growth of environmental concern in the United States, but it also seems true about many of environmentalism’s specific accomplishments.

For example, consider what is in retrospect (and by the movement’s own lights) one of its greatest successes: NEPA’s requirement that all executive federal agencies perform environmental assessments in advance of major projects and, if warranted by the project, prepare an environmental impact statement (EIS). NEPA itself was the work of Senator Henry Jackson, a long-time conservationist who came up with the bill in 1968. The major environmental organizations appear to have played basically no role in its development or passage, and they didn’t seem to think it was a big deal even immediately after it passed. NEPA’s language about environmental impact statements was quite vague, and nearly didn’t make the final version of the bill. Environmentalists eventually noticed that the bill implied they could sue agencies when they failed to produce EISs or when the EISs they did produce were deemed (by environmentalists) to be inaccurate or insufficient. Then, in another stroke of luck for environmentalists, the initial precedent-setting court rulings tended to interpret NEPA’s vague language about EISs to be very broadly applicable, to include both direct and indirect impacts, and in general to require fairly extensive research into potential environmental impacts. Since then, NEPA-enabled lawsuits have been one of the environmental movement’s most important tools in its fight for environmental protection.

The success of the environmental movement contrasts with the success of neoliberalism in another way. Whereas much of neoliberalism’s success may be explained by the fact that neoliberal policies disproportionately benefit big businesses and their wealthy owners, who have been in a position to provide neoliberalism with massive funding, it is difficult to tell a similar story about environmentalism. If anything, environmentalist activities tend to obstruct the projects of big businesses and their wealthy owners. Indeed, the benefits of environmentalist activities tend to be quite diffuse, uncertain, and geographically and temporally distant. Moreover, some environmentalist activities have virtually no (material) benefits to any political actors, as they are instead aimed at benefiting animals and ecosystems. Given this, the success of environmentalism is especially surprising, in a way that (e.g.) the success of neoliberalism is perhaps not.

Animal advocacy

Annotated bibliography for this section

Advocacy and activism on behalf of the welfare, rights, and interests of non-human animals — what I’ll call “animal advocacy” — has ancient roots in a variety of religious and philosophical traditions. Here, I focus instead on the “modern” animal advocacy movement that arose in Europe and the United States during the 19th century, with a special focus on farm animal welfare advocacy in the United States since the 1950s. (Farm animal welfare is a focus area of ours.)

I consulted several sources on the history of animal advocacy (see below); my impressions are below.

Before 1950

As far as I know, no organization specifically aimed at protecting or promoting animal welfare existed in any country before the 1800s, though there were occasional writings published, and laws passed, against animal cruelty.

In 1809, a Society for the Suppression and Prevention of Wanton Cruelty to Animals was founded in Liverpool, but it didn’t last long. In 1824, Reverend Arthur Broome created (in London) the Society for the Prevention of Cruelty to Animals, which in 1840 was granted royal status by Queen Victoria and was thus renamed the Royal Society for the Prevention of Cruelty to Animals (RSPCA). Besides Broome, other founding members included William Wilberforce (also a prominent advocate for abolition and children’s welfare) and Richard Martin (who had helped pass the Cruel Treatment of Cattle Act in 1822). The RSPCA directly inspired the creation of many similar groups in other areas, including in Northern Ireland (1836), Scotland (1839), Ireland (1840), the United States (1866), New Zealand (1882), Australia (1871), and Hong Kong (1903).

In its early days, the RSPCA focused on the treatment of draft and working animals (including e.g. horses pulling carriages and animals induced to fight for public entertainment), but also engaged in some advocacy on behalf of farm animals, and against animal experiments (vivisection).

Inspired by the RSPCA, diplomat Henry Bergh founded the American Society for the Prevention of Cruelty to Animals (ASPCA) in New York in 1866, which initially focused on the welfare of working horses and livestock.

In 1877, 27 animal welfare groups from around the U.S. were invited to a meeting in Cleveland, Ohio, to discuss the mistreatment of farm animals during their transport across the U.S., and these groups combined to form the International Humane Association, later renamed the American Humane Association (AHA). One of the AHA’s first activities was to help enforce an 1873 law preventing animals from being transported for longer than 28 hours without being given 5 hours of rest. Strong enforcement was maintained for a few decades, but then declined.

Around the world, the animal advocacy movement went into decline during the first half of the 20th century — no doubt in part due to the interruptions of two world wars — though of course animal advocates continued to win some victories during this period.

In general, why did the organized animal advocacy movement arise in the 19th century? Given what I’ve read, Ryder (1998)’s answer to this question seems like a reasonable guess (pp. 25-26):

After the Reformation in northern Europe the monasteries declined and good works became increasingly secularized. Rich patrons, often in partnership with the Church, set up schools and other institutions to help the poor and needy… In the late eighteenth century, philanthropic reform flourished in Britain, and a number of new humane organizations were founded, notably the Royal Humane Society in 1774. Criticisms of the slave trade increased, voiced often by the very same poets and essayists who were to attack cruelty to animals… The concern for the suffering of nonhumans was often expressed by leading philanthropists, although it was usually expressed a little later in their careers than was their [concern] for humans. Similarly, the foundation of [the RSPCA] in 1824 was some forty or fifty years after structurally equivalent bodies had been first set up for the purposes of human welfare. Throughout the nineteenth century, humane foundations proliferated; it is wrong, therefore, to assume that the animal welfare movement was anything but a part of this general compassionate movement. To argue that it was the increasing urbanization of the period that gave people a more humane view of animals is to ignore the synchronously increasing concern for the well-being of their own species. Far more convincing, surely, is the view that the new general affluence of the period allowed compassionate people to institutionalize their concerns generally and make them effective. The increasing democracy of this period, too, meant that during the nineteenth century politicians also became involved in the reform movements, motivated sometimes by a sense of Christian duty, sometimes by simple compassion and sometimes by more political considerations.

After 1950

Christine Stevens founded the Animal Welfare Institute (AWI) in 1951, and launched its legislative division — the Society for Animal Protective Legislation (SAPL) — in 1954. These organizations initially focused on animal experimentation, but later broadened their activities. They helped pass the Humane Slaughter Act in 1958, and later helped pass other legislation such as the Animal Welfare Act and the Endangered Species Act. In 2006, AWI launched the Animal Welfare Approved standards program.

In 1954, the Humane Society of the United States (HSUS) split off from the AHA. HSUS also helped to pass the Humane Slaughter Act, among many other activities.

Factory farming was invented in the 1920s, but its popularity grew slowly. In 1964, Ruth Harrison published (in the U.K.) the first major exposé on factory farming, Animal Machines. The resulting public outrage led to a commission on the topic headed by zoology professor Roger Brambell. The commission eventually released the “Brambell Report,” which recommended various regulations to protect animal welfare, and proposed the influential “Five Freedoms of Animal Welfare”: (1) freedom from hunger or thirst, (2) freedom from discomfort, (3) freedom from pain, injury, or disease, (4) freedom to express most normal behavior, and (5) freedom from fear and distress.

Philosopher Peter Singer read Animal Machines, and in 1973 wrote a book review of a collection of essays by Harrison and others on animal welfare issues. Singer expanded the ideas in that review into a book published in 1975, Animal Liberation. This book is plausibly the most influential document in modern animal advocacy. It has sold millions of copies, and multiple sources I consulted suggested that most animal activists seem to own a copy. When Open Philanthropy Project Program Officer Lewis Bollard asked ~40 current leaders in the animal welfare movement what had originally influenced them to become involved, more than half of respondents mentioned Animal Liberation specifically.

In New York, Henry Spira encountered Singer’s work and founded Animal Rights International (ARI) in 1974. Initially, ARI focused on opposing animal testing, launching highly successful campaigns against the American Museum of Natural History (which was experimenting on cats) and Revlon (for its testing of cosmetic products on animals). In the 1990s, Spira shifted his focus to factory farming, so that he could have an impact on larger numbers of animals.

In 1972, Ronnie Lee founded the Animal Liberation Front in Britain, a radical coalition of activists who engage in (often illegal) direct action on behalf of animal rights, for example by removing animals from farms and research laboratories, and by destroying facilities.

In 1980, Ingrid Newkirk and Alex Pacheco founded People for the Ethical Treatment of Animals (PETA). PETA initially focused on animal testing and veganism, but in the 1990s shifted more attention to farm animal welfare. PETA has also conducted undercover investigations, and courted celebrity promotion of veganism.

In the 1990s, Spira and PETA collaborated on a campaign against McDonald’s, which resulted in the first major commitments won by corporate campaigns. Burger King and several other fast food companies soon made similar commitments, which were guided by a 1996 survey of slaughterhouses conducted by Temple Grandin for the USDA. Grandin “recommended design changes for slaughterhouses and stricter inspection stipulations; e.g., finding more than 1% of cattle to be conscious when hoisted up for slaughter (“sensible on the rail”) would be grounds for automatic failure, and companies (e.g. McDonald’s) would stop buying from that farm. This resulted in a very rapid shift (within roughly a year) from a high proportion of animals being incorrectly stunned and slaughtered to general compliance with the 1% regulation. These reforms affected cattle and pig slaughterhouses…”

In 2000, in part due to campaigns against corporations, several major fast food companies began to require larger cages for egg-laying hens, and called for an end to forced starvation molting.

In 2008, an undercover investigation by HSUS captured video of inhumane treatment of “spent” dairy cows at a Hallmark/Westland slaughterhouse. This footage may have gotten more prominent media coverage than any previous undercover investigation. The USDA ruled that Hallmark/Westland had violated its regulations, the plant was closed, and the largest meat recall in U.S. history was launched. Hallmark/Westland was also sued by multiple parties, and went out of business. After this, USDA enforcement actions jumped from about 5/year to 100/year.

HSUS and other organizations led some ballot measure campaigns from the 1980s onward, but these didn’t focus much on farm animal welfare until the 2000s. In 2008, a ballot measure in California (Prop 2) banning gestation crates and veal crates passed. Prop 2 was also designed to ban battery cages, though this ultimately failed because regulators interpreted it to allow for larger battery cages. Nonetheless, this success had substantial flow-through effects. For example, HSUS next announced its intention to introduce a similar ballot measure in Michigan, and to avoid a costly campaign, Michigan agricultural producers instead lobbied the state to ban battery cages and gestation crates, but on a more lenient timeline than a ballot measure would likely have allowed. HSUS then announced a similar ballot measure in Colorado, prompting the pork industry to lobby the state for a ban on gestation crates. HSUS’s subsequent announcement of plans for a similar ballot measure in Ohio led to a compromise, as the agriculture industry agreed to a ban on gestation crates and veal crates but only put a moratorium on building new battery cages.

In the late 2000s, in response to a variety of pressures, several U.S. meat producers began to pledge to phase out gestation crates and, a bit later, battery cages.

In Europe, some countries have banned veal crates, gestation crates, and battery cages, and many countries have other animal welfare-related regulations. Compared to the United States, relatively more of these developments might have been the work of technocratic officials rather than the work of organized advocacy groups. Of course, many victories in the space continue to be largely the work of Europe’s leading advocacy groups, for example the RSPCA, Compassion in World Farming (CIWF), and Eurogroup for Animals.

Outside the U.S. and Europe, there has been relatively little organized animal welfare work until the last decade or so.

Funders

Most animal welfare organizations in history seem to have relied on the support of members and relatively small donors. In other words, animal advocacy has been a mostly grassroots movement. As far as I can tell, large philanthropists have entered the space only recently, including The Price is Right host Bob Barker and others. Some of these philanthropists seem to be engaged in active and strategic field-building.

Annotated bibliography of the key sources I consulted

Bioethics

Cryonics

Molecular nanotechnology

Neoliberalism

Conservative legal movement

American geriatrics

American environmentalism

Animal advocacy

Sources