by Judith Curry

Partisan groups lobbying for preferred outcomes have a long history of the selective use of information to support predetermined conclusions. This is acceptable in politics, but not in science. The motivations for such advocacy science may be a sincere desire to improve the protection of . . . ecosystems and frustration with decision-making processes that seem to give too little weight to longer term environmental considerations, or a cynical strategy to exploit the challenges that uncertainty poses to decision-making. Whatever the cause, making science advice itself partisan means it no longer deserves to be treated in any special way in the decision-making process. There is a serious risk that the long-term costs of merging advocacy with science advice would outweigh any short-term benefits of greater impact on a particular decision. If scientists do wish to increase the impact of science advice on decision-making, there are alternatives to advocacy in doing so. These approaches make the advice more amenable to decision-makers, while avoiding turning science advisors into partisan lobbyists.

The above text is from the conclusions of the following paper:

Food for thought: Advocacy science and fisheries decision-making

Jake Rice

Abstract. Science advice is supposed to meet idealistic standards for objectivity, impartiality, and lack of bias. Acknowledging that science advisors are imperfect at meeting those standards, they nonetheless need to strive to produce sound, non-partisan advice, because of the privileged accountability given to science advice in decision-making. When science advisors cease to strive for those ideals and promote advocacy science, such advice loses the right to that privileged position. There are temptations to shape science advice by using information that “strengthens” the conservation case selectively. Giving in to such temptation, however, dooms the advice; science advice becomes viewed as expressions of the biases of those who provide it rather than reflecting the information on which the advice is based. Everyone, including the ecosystems, loses. There are ways to increase the impact of science advice on decision-making that do not involve perverting science advice into advocacy: peer review by diverse experts, integrating advice on ecological, economic, and social information and outcomes, and focusing advisory approaches on risks, costs, and trade-off of different types of management error. These approaches allow the science experts to be active, informed participants in the governance processes to aid sound decision-making, not to press for preselected outcomes. Everyone, including the ecosystems, wins.

Published in the ICES Journal of Marine Science, [link] to abstract. The paper is behind a paywall; here are some excerpts:

From the “Introduction”:

Many papers have stressed the importance of separating policy advocacy from science advice. Nonetheless, concern over the boundary between science and advocacy seems pervasive, and debate on how science should inform policy continues in many fields, for instance in climate change, health, and food safety. Advocacy science is a subtly nuanced issue. Society benefits from well-informed experts participating in public dialogue on policy issues, and providing information on how consistent policy alternatives are with the scientific information in their area of expertise. However, when those experts place their desired policy outcomes ahead of the basic principles of sound, objective science, an important boundary is crossed. Not only are the benefits reduced, but public dialogue actually suffers because the factual basis of the dialogue is distorted.

When seemingly well intentioned experts with excellent credentials give policy-makers contrasting advice on the same issue, the science becomes part of the policy debate rather than providing a unifying foundation on which well-informed policy debate can take place. In those circumstances, it is necessary to tease apart how much of the disagreement among experts is attributable to uncertainties in the scientific and technical information itself, and how much to differences in the risk tolerances of various experts, tolerances that are often unstated and applied subjectively because the risks are difficult to quantify. This differentiation is important because the first source of potential disagreement among experts is within the domain of sound science, whereas the second is the domain of policy.

From “The privileged role of science advice”:

Science is special because of how it conducts studies and seeks answers. The principles of empiricism, objectivity, falsifiability, and unbiased interpretation of results are the heart of sound science. Critics from the social sciences correctly point out that science is practiced by humans who individually may be imperfect in adhering to these principles. However, that is not an excuse to abandon those principles. Rather, it is the rationale for challenge-format peer review, with reviewers drawn from as wide a range of appropriate perspectives as possible.

The need to address uncertainty poses a challenge for science advisors. While adequately communicating uncertainty, advisors need to keep their messages clear and simple. Clarity cannot be overdone, but simplicity can be.

Even such modern frameworks may be challenged to represent complexities that arise when individually sound studies produce contrasting results. Care must be taken to avoid interpreting such situations as if one of the multiple formulations is correct but that knowledge is inadequate to determine which one. Rather, any of several formulations of a process may be correct in a particular context, depending on externalities or pure chance. It is this type of uncertainty where simplification risks becoming bias.

Biasing the science inputs to the policy dialogue to favour studies reporting a particular outcome takes the application of precaution away from decision-makers and embeds it inappropriately in the expert advisory processes, so making supposedly rigorous decision rules produce the outcomes predetermined by the biases in selecting the information on which decision rules depend.

From “The sources and dangers of advocacy biases in science”:

The frustration many experts feel about fisheries decision-making is understandable. The track record of necessary conservation measures being deferred or diluted is well documented, consistent with assertions that industry interests exploit scientific uncertainty for their partisan goals, and that decision-makers give more weight to short-term outcomes than to longer term consequences.

Science advice is not invincible; its privileged role should not be measured by how often science advice dictates the outcomes of complex decisions, but by how it is reflected in the accountability of decision-makers when their decisions go counter to science advice. If the decision-maker chooses options that are inconsistent with the science advice, however, it is the wisdom and judgement of the decision-maker that is questioned.

Despite its privileged position, science advice often does not dominate the decision-making process. Moreover, if the advice realistically covered the diversity of results relevant to complex issues, articulate decision-makers may explain a wide range of decisions made for political ends as consistent with the science advice. This could increase the science advisors’ frustration, and again increase the temptation to practice advocacy science: illustrating the advice only with those case histories and analyses that would lead to the preferred outcomes, and downplaying evidence contrary to the outcome they want from the decision-making process.

However, this type of strengthening of the science advice makes it no different from any other advocacy document that the decision-maker receives. Each competing interest group has done its best to sift through the scientific evidence for the portion that supports its preferred outcome. When science advisors also take that strategy, there is no higher accountability to adhere to the (now biased) science advice than there is for any other document.

Moreover, once an advocacy science strategy is adopted, to win against advocates of other options, the science advisors need to play partisan tactics better than their competitors. Advocates of competing views are not bound by expectations of balance and objectivity in their arguments, and are often experienced lobbyists.

Hence, when science advisors adopt partisan tactics, to be effective they must increasingly bias the advice, further distancing it from the principles of sound science. Eventually, science advice on high-profile issues will be scrutinized by partisans on all sides, and the lack of balance and objectivity will be discovered and publicized. As this happens, the special attention that science advice gets in decision-making becomes compromised, with lasting consequences.

From “Other options”:

The first step is to make the science advice more inclusive of the range of considerations that are relevant to the decision. Policy-makers have requested more integrated advice for more than a decade, and frameworks for doing so exist, as do processes for multi-criterion decision-making.

If the advice on ecological, social, and economic outcomes is provided piecemeal, then the decision-makers themselves have to interconnect the consequences of each option without the benefits of a structured framework. Laying out the complete set of outcomes associated with the options available does not degrade the quality of information on any of the individual dimensions of the decision. Rather, it adds value by showing what trade-offs have to be made socially and economically if the ecologically optimal decision is taken, and what costs have to be paid ecologically if status quo or increased social and economic benefits are chosen.

However, integrated advice at least makes the trade-offs transparent and allows public debate about the major dimensions of the decision in a single science-based framework. Making science advice more integrated across the major dimensions of a policy decision produces at least two benefits. First, it encourages the science advisors to explore a wider range of policy alternatives in developing the advice, because the shortcomings of individual options may be more apparent. Second, advisors may cease to focus on determining the optimal outcome on a single dimension, and identify the options that produce acceptable outcomes on all of them.

JC note: the IPCC First Assessment report arguably did this. Advocacy kicked in for the later reports.

The other step that can be taken is to present advice using approaches designed specifically for decision support rather than hypothesis testing. Particularly in complex ecological systems that may not have deterministic outcomes for a given set of conditions, the notion that one hypothesis is true and the alternatives false is not a particularly helpful basis for policy advice.

This differentiation of types of management error helps decision-makers in two circumstances. One is when the two types of error have different costs. Failing to protect critical habitat (a miss) may have lasting impacts on stock productivity (a high cost), whereas if ample fishing opportunities exist outside an area of concern, prohibiting fishing in it unnecessarily (a false alarm) may at worst result in a small increase in travel time to open fishing grounds (a low cost). On the other hand, closing a fishery based on a single low stock-status indicator that turns out to reflect a change in the distribution rather than the abundance of the stock (a false alarm) may cause great hardship to dependent communities (a high cost), whereas a modest reduction in quota while gathering more information about actual stock status to use in the next assessment (a miss) may have little lasting impact on stock dynamics as long as the additional information really is gathered and used (a low cost). These examples illustrate that the costs of misses and false alarms are case specific, and indeed part of what decision-makers should consider. Such frameworks also help decision-makers deal with partisan issues where different sectors of society have different tolerances for misses and false alarms.

Moving discussion between the two interest groups from accusations of extreme outcomes to discussion of trade-offs between misses and false alarms led to a more constructive exchange of views, and gave decision-makers a less-partisan context in which to explain their decisions. There is no guarantee that signal-detection-type frameworks will always result in constructive dialogue between groups with strongly contrasting risk tolerances, but it is at least a basis for dialogue where the potential benefits and shortcomings of all options are explicit in non-judgemental language.
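To make the misses-versus-false-alarms logic concrete, here is a minimal sketch (not from Rice’s paper) of the expected-cost bookkeeping that this kind of decision-support framework implies. The option names, probabilities, and costs below are invented placeholders for illustration; a real analysis would have to estimate them case by case.

```python
# Illustrative sketch of a misses-vs-false-alarms comparison.
# All probabilities and costs are made-up numbers, not estimates from the paper.

def expected_cost(p_depleted, cost_if_depleted, cost_if_healthy):
    """Expected cost of an option, given the probability the stock is truly depleted."""
    return p_depleted * cost_if_depleted + (1.0 - p_depleted) * cost_if_healthy

p_depleted = 0.4  # assumed probability that a low indicator reflects real depletion

# Option 1: close the fishery now.
#   If the stock is depleted, closure is the right call (small error cost).
#   If the indicator only reflected a distribution shift, closure is a false alarm
#   with a large short-term social and economic cost.
close_now = expected_cost(p_depleted, cost_if_depleted=5, cost_if_healthy=80)

# Option 2: modest quota cut while gathering more information.
#   If the stock is depleted, delaying is a miss with some lasting biological cost.
#   If the stock is healthy, the small cut carries only a minor cost.
cut_and_monitor = expected_cost(p_depleted, cost_if_depleted=40, cost_if_healthy=10)

print(f"close now:       expected cost {close_now:.1f}")
print(f"cut and monitor: expected cost {cut_and_monitor:.1f}")
# The ranking flips as the assumed probabilities and relative costs change,
# which is exactly the trade-off the framework asks decision-makers to weigh.
```

The point of such a sketch is not the numbers, which are arbitrary here, but that it forces the costs of each error type, and the tolerance for each, to be stated explicitly rather than buried in the advice.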

JC comment: I found this paper to be extremely insightful, with obvious implications for the climate debate. This second paper explicitly examines the climate debate.

Scientific Misconduct: The Perversion of Scientific Evidence for Policy Advocacy

George Avery

Abstract. Science is increasingly being manipulated by those who try to use it to justify political choices based on their ethical preferences, and who are willing to act to suppress evidence of conflict between those preferences and the underlying reality. This problem is clearly seen in two policy domains, healthcare and climate policy.

In the area of climate policy, recent revelations of emails from the government-sponsored Climate Research Unit at the University of East Anglia reveal a pattern of data suppression, manipulation of results, and efforts to intimidate journal editors to suppress contradictory studies, and indicate that scientific misconduct has been used intentionally to manipulate a social consensus to support the researchers’ advocacy of addressing a problem that may or may not exist.

In healthcare policy, critics have long worried about the inordinate influence of pharmaceutical and medical device manufacturers on research to show the safety and viability of new products. Recent information, however, shows that government agencies may cause more problems in this area, a worrisome development considering that legislation currently before the U.S. Senate would allow federal agencies to punish organizations whose researchers publish results that conflict with what the agency feels is appropriate.

That bill allows the withholding of funding from an institution where a researcher publishes findings not “within the bounds of and entirely consistent with the evidence,” a vague authorization that creates a tremendous tool that can be used to ensure self-censorship and conformity with bureaucratic preferences. As the research group Academy Health notes, “Such language to restrict scientific freedom is unprecedented and likely unconstitutional.”

Citation: Avery, George H. (2010) “Scientific Misconduct: The Perversion of Scientific Evidence for Policy Advocacy,” World Medical & Health Policy: Vol. 2: Iss. 4, Article 3. DOI: 10.2202/1948-4682.1132

Available at: http://www.psocommons.org/wmhp/vol2/iss4/art3 (Full article is behind paywall).

From the Conclusion:

These cases highlight the temptations toward manipulation of scientific data to build support for favored political and economic outcomes. The purpose of systematic testing and evaluation of ideas, which we describe as “science,” is to allow us to differentiate between what Hayek refers to as “facts” and “appearances.” As Kuhn notes in his canonical work, The Structure of Scientific Revolutions, paradigms in science should logically change when a new model produces enough strong arguments in its favor, through persuasive argument, to convince the field that it has greater utility than a previous framework (Hayek 1952). Properly used, science in this process evolves and gives us an objective means to evaluate the cause of problems and the potential impact of proposed policy interventions, which gives us a basis to evaluate proposals based on an informed evaluation in the context of moral and ethical values.

What is happening appears to be a political revolution in science, where efforts are made to subvert institutions of science such as open inquiry, peer review, and objectiveness in order to change the political environment. When we abandon the values and practices of science, or pervert them to support a predetermined agenda, we elevate “appearances” and subordinate “facts.” Abandoning the objectivity of science to suppress evidence that does not favor the preferences of the censor undercuts the ability of the polity to make rational decisions. Such censorship is inconsistent with democratic ideals in that it denies venues for legitimate exchange of ideas through open debate.

Equally important from the perspective of science is that it undermines the credibility that is derived from the scientific traditions that promote dispassionate objectivity. Just as individual violations of the ethical rules of science can undermine the credibility of the researcher, systematic assaults on these institutions can undermine the credibility of whole disciplines, even the credibility of the basic objectivity of science. In the area of public health, warnings are already being issued that the actions and rhetoric of the community are undermining confidence in public health programs.

While private misconduct is threatening enough, the growing practice of governmental and quasi-governmental censorship of scientific data may be even more frightening. Private censorship can be limited if a diversity of outlets exists for communication. Private organizations lack the coercive power of government, and no private organization—even large and wealthy corporations in the energy or pharmaceutical industries—possesses the power and resources of governments.

Furthermore, a fundamental duty of a democratic regime is to ensure the conditions for open exchange of information and informed participation of citizens in governance. Just as science uses ethical standards to promote the credibility and legitimacy of knowledge, the credibility and legitimacy of a democratic state depend on trust in the state to fulfill its duty to act in an ethical manner and maintain sufficient openness that informed participation is possible. Violation of the letter and spirit of that duty undercuts the social contract that is the foundation for the legitimacy of the democratic state. Democracy depends not on the preferences of elites, but rather on a functional marketplace for ideas and vigorous debate between contending viewpoints.

There have been two published rebuttals to Avery’s paper, one of them by Trevor Davies of the University of East Anglia. Avery has responded. The rebuttals and responses are all behind a paywall.

JC comment: Both of these papers are insightful and hard-hitting. The field of climate science needs to look in the mirror; I’m afraid that many would see what Rice and Avery are warning against. I would like to thank Jeroen van der Sluijs (father of the uncertainty monster) for sending me these papers.