H-Diplo | ISSF Partnership

A production of H-Diplo with the journals Security Studies, International Security, Journal of Strategic Studies, and the International Studies Association’s Security Studies Section (ISSS).

http://issforum.org

H-Diplo/ISSF Editors: Thomas Maddux and Diane Labrosse

H-Diplo/ISSF Web and Production Editor: George Fujii

Commissioned for H-Diplo/ISSF by Thomas Maddux

H-Diplo | ISSF Article Review 55

“Process Tracing: A Symposium.” Security Studies 24:2 (April-June 2015): 200-250. DOI: 10.1080/09636412.2015.1036580. http://dx.doi.org/10.1080/09636412.2015.1036580

Review by Keren Yarhi-Milo, Princeton University[1]

Published by ISSF on 20 May 2016

tiny.cc/ISSF-AR55

https://issforum.org/articlereviews/55-process-tracing

https://issforum.org/ISSF/PDF/ISSF-AR55.pdf

For scholars engaging in qualitative analysis, the concept of ‘process tracing’ comes in many shapes and forms. In its most basic form, process tracing refers to a set of procedures that uses qualitative evidence in an attempt to establish a causal relationship between one or more explanatory variables and a dependent variable. Notwithstanding the common use of this conventional term in scholarly work in international relations, we still lack conceptual clarity about what ‘good process tracing’ genuinely entails, how to utilize it best in qualitative research, and what its limitations are. This is unfortunate given that one of the advantages of case studies is their potential for illuminating causal mechanisms. Process tracing, in particular, is one of the most useful procedures available to test causality using qualitative evidence. For exactly these reasons, this symposium is of significant importance both for scholars who already believe (mistakenly or not) that the evidence they present amounts to ‘process tracing,’ as well as those who wish to learn how to use this important technique in their future qualitative work.[2]

This symposium does not propose a list of best practices. Rather, it is organized thematically, as each of the four articles in this symposium tackles the question of how to conduct good process tracing from a different angle, highlighting different challenges and solutions.

In the first essay, “Process Tracing and Historical Explanation,” James Mahoney offers a detailed review of three requirements for good process tracing.[3] Researchers should have in-depth historical knowledge of the case, a solid understanding of preexisting theories relevant to the study, and a strong ability to formulate a logical and coherent argument that integrates case-specific facts with more general knowledge. He shows how process tracing can be used in theory testing, which involves testing the validity of a given hypothesis via “hoop” and “smoking gun” tests,[4] as well as theory development, which identifies potential causal factors via counterfactual analysis or inductive discovery based on the existing literature. When done in a transparent manner, process tracing strengthens historical explanation by parsing the sequential flow of events and illuminating the causal conditions and mechanisms.

While Mahoney’s essay is pitched at a broad level and to audiences across subfields in political science, Nina Tannenwald’s contribution, “Process Tracing and Security Studies,” highlights the unique challenges of applying process tracing to the realm of security studies within international relations, a subfield that has employed qualitative work more regularly than its counterpart, international political economy.[5] Tannenwald reminds readers that process tracing has been used in a variety of forms to test many kinds of arguments in the study of international security; yet she offers a more nuanced appreciation of what good process tracing can and cannot do. Zooming in on the famous debate over the end of the Cold War between Matthew Evangelista’s ideational approach[6] and Stephen Brooks and William Wohlforth’s realist one,[7] Tannenwald concludes that good process tracing on each of the contending approaches did not allow either side to win the debate. This impasse arose because the contending authors disagreed as to what counted as a test of their explanations, and thus what the findings of the process tracing should even be. Tannenwald argues that in order to advance debates of this sort, contending authors should try to “agree in advance to instances where their explanations suggest mutually exclusive evidence”[8] and thus focus the process tracing on precisely the evidence where the rival explanations make divergent predictions.
Tannenwald’s essay is thus very helpful in illuminating why establishing clear standards of process tracing in advance can sharpen explanatory debates in the field of security studies, although it remains somewhat unclear how, practically, we can get scholars to agree on what to disagree on in advance. As a first step, it might be useful if, as a matter of routine, scholars specified in advance the leading alternative explanations and the kinds of evidence that would support (or not) their own explanation as opposed to the alternatives.

Andrew Bennett’s essay, “Using Process Tracing to Improve Policy Making: The (Negative) Case of the 2003 Intervention in Iraq,” provides a very insightful and less familiar form of process tracing.[9] It attempts to apply process tracing to the study of the effectiveness of policy choice. This is quite an innovative way of thinking about the potential utility of process tracing for advancing the quality of policy assessments by policy makers and intelligence analysts. Bennett provides a good template for conducting such an analysis, while pointing out the likely hurdles of applying process tracing to the realm of policy analysis. Ultimately, like Bennett, I remain skeptical about the ability of policymakers or intelligence analysts to follow, in practice, the logic of Bayesian updating in the manner he advocates. The case of the 2003 U.S. military intervention in Iraq highlights many of the reasons why politicians are poor Bayesian updaters. This is not only because of the familiar psychological impediments, but also because there are good political reasons why they are rarely going to agree – especially on issues of high politics – on the likelihood of true positives and false positives, leaving them unable to converge over time. Intelligence analysts, however, might be more receptive and better students of such tools, although even they are not immune to some of the motivated and unmotivated biases that decision-makers face. Their ability to pursue Bayesian logic in process tracing can be further complicated by politicization pressures.

Finally, David Waldner’s essay provides a critical review of the previous three articles, proposing an alternative “completeness standard” procedure that improves researchers’ ability to make unit-level causal inferences (or within-case analysis) with process tracing.[10] He aims to combine both case-specific explanations and average treatment effects. First, he proposes that process tracers should draw a causal graph representing all the causal steps and relations (average treatment effects) between the independent and dependent variable. Next, process tracers should draw an event-history map for each specific case study, representing unit-level causal effects. Third, in the descriptive inference step, researchers should evaluate how closely the causal graph and event-history map correspond, applying the concepts of construct validity, measurement reliability, and measurement validity. Finally, in the causal inference step, process tracers should identify causal mechanisms for each step in the causal graph, instantiated by actual events. Rather than simply following best practices, Waldner argues that a systematic, formalized procedure will help to articulate clear and explicit standards for process tracing and improve the validity of causal inferences drawn from historical case studies. Moreover, he favors a deductive rather than inductive approach. Waldner makes a valuable contribution to increasing the analytical rigor of process tracing, but the high standards of completeness for a theory may not always be feasible, as not every step in an explanation may fully determine the next due to exogenous variables or a lack of empirical evidence. In addition, his approach requires the relevant theories to have been adequately developed in that area in order to conduct deductive process tracing.

Reflecting on this symposium, it is apparent that the field of security studies, as well as qualitative political science more broadly, is still far from reaching a consensus on best practices for the implementation of process tracing. We all agree that transparency is an important component of good process tracing; that scholars should have clear theoretical and observable predictions that they can test with process tracing; and that process tracing is but one tool – among others – that scholars can use to build and/or to test theories. Outside of these main points, scholars largely disagree about fundamental issues. Some of these disagreements, perhaps unsurprisingly, flow from differences in underlying philosophical and epistemological positions, while others stem from simpler matters of stylistic taste.

Nevertheless, I will outline here three major tensions that we should acknowledge. The first is between the utility of inductive versus purely deductive forms of process tracing. In this symposium, the essays by Tannenwald, Mahoney, and Bennett allow for a more inductive form of process tracing in that scholars are encouraged to use process tracing to develop or further refine theory. Mahoney views one of the utilities of process tracing as its ability to generate inductive discoveries based on good historical knowledge or existing theories. These inductive discoveries can then be used to generate hypotheses to be tested on additional cases or on a different set of data from the same case. Yet Waldner urges scholars to apply a strictly deductive form of process tracing that is to be evaluated against the “completeness standard” he proposes. These approaches differ considerably in how they conceive the role of process tracing in relation to theory. According to Waldner, scholars should use process tracing only to test already well-defined theories in which the logical chain of causal mechanisms is fully complete and formalized. In attempting to reach an agreed-upon standard, scholars should further wrestle with the conditions under which inductive or deductive forms of process tracing are appropriate, and better highlight the trade-offs between the two. Crucial to this decision, however, is the scholar’s awareness of his or her research objective, which should shape which type of process-tracing procedure is most relevant or advantageous for advancing the theory.

Second, readers of this symposium may detect a certain tension between a focus on theory versus methodology. Simply put, scholars who take process tracing seriously, either to construct or to test a hypothesis, should not neglect the other components that make a theory a good or an important one. The fact that the different causal parts of a theory can be nicely process traced does not necessarily indicate that the theory sheds light on an important question or that it is superior to other theories attempting to explain the same phenomena. The focus in this symposium on good process tracing thus should not be taken as a signal to scholars to privilege analytical procedures at the expense of developing important, generalizable theories that offer new insights and generate new questions. Alternatively, an argument can be made that process tracing that yields insights confirming or disconfirming existing IR theories should be valuable as well. Thus, the relative value of using process tracing to test new theories as opposed to existing ones is something that the field of qualitative work in IR, at least, has not come to terms with just yet. The piece by Tannenwald, for example, highlights some of the difficulties in doing the latter, and offers some useful suggestions on how to do it right.

Third, the authors in this symposium differ on the extent to and manner in which process tracing should be formalized. Tannenwald, for instance, cautions against excessive formalization of the process-tracing procedure, fearing that such an approach would overwhelm the narrative of the case and with it its appeal. At the other end of the spectrum, Waldner’s approach calls for an almost extreme formalization of process tracing that could push scholars to write their case analyses in a manner that would be difficult for readers to follow, and for scholars from neighboring disciplines -- such as history, sociology, or anthropology -- to appreciate and engage with. Future discussions of best practices in process tracing should further wrestle with how scholars should approach this issue, and provide useful advice or templates on how to maintain rigor without abandoning richness in the writing of the case.

Finally, I believe there is another tension, not discussed in this symposium, but likely to arise as soon as scholars attempt to implement some of these practices in their research. This involves the trade-off between transparency and practicality. As suggested above, scholars agree that good process tracing should be executed in a detailed, structured, and transparent manner. Yet the bar set by some of the contributors to this symposium would undoubtedly require scholars to spend a large portion of an article outlining the procedure and explaining its rationale and utility, while also applying it to several case studies. Given that most journals today have a strict limit of somewhere between ten and fifteen thousand words for articles, a practical problem arises: how can scholars fit it all in? The need for greater reliance on online appendices in qualitative work has already been voiced by scholars calling for more transparency in qualitative methods in general. In the specific context of process tracing, however, it is unclear from the symposium what portion of the process-tracing procedure should remain in the main text and what should be relegated to an online appendix. There is obviously an element of personal style here, but I believe that agreement on best practices for such issues would facilitate the widespread application of these procedures.

Overall, this symposium provides many useful insights about the use and implementation of process tracing. Undoubtedly, this symposium will be widely read by scholars seeking to gain a better understanding of how to conduct good process tracing as well as the challenges involved. In improving the standards of rigor and transparency in process tracing, we must be mindful that it is a tool that should be kept flexible for a variety of approaches and applications in developing broader social science theory as well as explaining specific cases. Thus, the optimal design and standards of process tracing may vary according to the nature of the research question and its goals.

Keren Yarhi-Milo is an Assistant Professor of Politics and International Affairs at Princeton University’s Department of Politics and the Woodrow Wilson School of International and Public Affairs. Her work has been published in numerous academic journals including International Security, International Organization, International Studies Quarterly, and Security Studies. Her book, Knowing the Adversary: Leaders, Intelligence Organizations and Assessment of Intentions in International Relations (Princeton University Press, 2014), is based on her Ph.D. dissertation, which won the 2010 Kenneth Waltz Award from the American Political Science Association. She is currently working on a new book project that examines variation in leaders’ concerns for international reputation in international politics. Yarhi-Milo holds a Ph.D. and a Master’s degree from the University of Pennsylvania, and a B.A., summa cum laude, in Political Science from Columbia University.

Copyright ©2016 The Authors.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License