Why we fail to prevent civil wars: A forecaster’s perspective

Hannes Mueller, Christopher Rauh

Effective forecasting of conflict risk could help prevent civil wars. But resource constraints mean that policymakers, wary of acting on false positive warnings, rarely act until conflict begins. This column argues that the policy of reacting to violence instead of preventing it cannot be justified, given the accuracy of simple forecasting models such as those based on news text.

Research on violent conflict is booming. Since Blattman and Miguel (2010) published an overview on the topic, there have been more than 40 articles on this issue published in the top five economics journals alone. At the same time, it does not appear as if governments and multilateral agencies have become better at preventing violent conflict.

In this column we try to contribute to the explanation of this apparent failure from a forecasting perspective. We organise recent forecasting research and illustrate how it could help to prioritise resources and attention in policymaking. We then use this perspective to offer one explanation for the current situation [1].

Research and policy from a forecasting perspective

It is useful to distinguish four phases of conflict forecasting, because each phase has different implications for both prediction and prevention.

In the first phase, conflict risk is latent. At this point, the analysis of long-term risks of instability is most important. In the second phase, a specific conflict is materialising; here it is more important to understand short-term changes in risk on a yearly or monthly basis. The third phase is conflict itself. The fourth phase is post-conflict. It is similar to the second phase (most conflicts recur) but requires a different set of policies.

Many studies in economics have tried to understand long-term risks. Long-term risks help us to distinguish between countries and regions with high and low risk of conflict. Ethnic composition is one example (e.g. Michalopoulos and Papaioannou 2016). Here, an area of high risk remains an area of high risk. From a policy perspective this is obviously not an attractive prospect. Analysis of mechanisms in this area, however, can still help address these risks.

Research has also analysed moderating factors such as political institutions (Besley and Persson 2011) or inter-ethnic trade and trust (Rohner et al. 2013), which offer both time variation and specific channels for intervention. There is also a growing body of work that analyses sudden shocks, such as changes in natural resource prices or climatic shocks. In other words, the economics literature now offers specific models to identify risks and causal mechanisms. The promise is to understand not only long-term risks, but also short-term variation in risk. This would potentially be useful in the second and fourth phases of conflict.

Unfortunately, existing empirical models have been developed to identify specific channels. As a result, the variables proposed do not have much forecasting power (Mueller and Rauh 2016). Forecasting models built with these variables tend to indicate whether conflict is more likely in general, rather than when any specific country might be on the brink of conflict. This implies that there is still a gap in our knowledge of how to analyse risk in the years before and after armed conflict.

Using news for short-term forecasting

Using news sources has proven to be a promising way to analyse short-term risks (Ward et al. 2013, Chadefaux 2014). News reports capture expert knowledge and generate a lot of time variation. In our recent work, we propose the use of a topic model based on nearly 700,000 articles to forecast conflict (Mueller and Rauh 2016). In this model, topics are probability distributions over words and are formed automatically by a machine-learning algorithm. Figure 1 shows the top expressions in one of the resulting topics, which is clearly related to conflict. The advantage of collections of words like this for forecasting is that they provide both width (many different topics can be traced at the same time) and depth (forecasting is based on a large collection of words). We show that the appearance and disappearance of topics at the country level can be used effectively to identify high-risk years before and after conflict.

Figure 1 Most prominent words in example topic

How forecasting can, and cannot, help for short-term prevention

A forecaster needs to decide at which level of risk to warn of a conflict. To use the terms from forecasting research, there are true positives, false positives, true negatives and false negatives.

A true positive is when a situation with a conflict warning escalates into conflict; a false positive is a warning after which no conflict materialises.

Similarly, a false negative is a conflict that the model did not identify, whereas a true negative is correctly forecasted peace.

Policymakers want to avoid both false negatives and false positives. This generates a trade-off when selecting the cutoff that triggers a warning. If the cutoff is very low, the model warns of conflict everywhere, every year; this produces no false negatives, but many false positives. A very high cutoff, on the other hand, produces no false positives but many false negatives. For a continuum of cutoffs this trade-off can be visualised in a receiver operating characteristic (ROC) curve.
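
The cutoff trade-off can be made concrete on simulated data. Everything below is invented for illustration (the base rate and the score distributions are assumptions); only the mechanics of warning above a cutoff mirror the discussion.

```python
# Illustration of the warning-cutoff trade-off on synthetic risk scores.
# All numbers are simulated assumptions, not outputs of the actual model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.random(n) < 0.02  # rare conflict onsets (~2% of cases)

# Hypothetical risk score: higher on average for true onsets
score = np.where(y, rng.normal(0.7, 0.2, n), rng.normal(0.3, 0.2, n))

results = {}
for cutoff in (0.1, 0.5, 0.9):
    warn = score >= cutoff
    tpr = (warn & y).sum() / y.sum()      # true positive rate
    fpr = (warn & ~y).sum() / (~y).sum()  # false positive rate
    results[cutoff] = (tpr, fpr)
    print(f"cutoff={cutoff:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```

Sweeping the cutoff continuously and plotting the resulting (false positive rate, true positive rate) pairs traces out exactly an ROC curve.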

Take the two ROC curves in Figure 2. These curves show the trade-off between the false positive rate – the share of peaceful cases wrongly flagged as heading for conflict – and the true positive rate – the share of actual conflicts that were correctly flagged in advance. A perfect forecast would achieve a true positive rate of one at a false positive rate of zero.

The blue solid line in Figure 2 comes from an out-of-sample forecasting model in which our news topics and a dummy for armed conflict (defined as at least 25 battle deaths) are used to forecast the outbreak of civil war (defined as at least 1,000 battle deaths) one year before it occurs. As out-of-sample forecasts go, this is a pretty good one: the curve bends up sharply and reaches a true positive rate of almost 90% at a false positive rate of only 20%.

Figure 2 ROC curves of conflict onset and incidence prediction

Resources and especially attention are scarce in government and multilateral agencies, so the appetite for false positives is extremely limited. This is a problem because civil war onset is rare. It happens in roughly 2% of all country/years. Reaching a true positive rate of 90% on the blue curve would therefore require raising about 12 warnings for one true positive. To reach a higher ratio of true positives to false positives, we would need to reduce warnings dramatically, and go to the much steeper part of the ROC curve. Fewer warnings will prevent fewer conflicts. Understanding this trade-off is key to understanding why prevention often fails.
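
The warning arithmetic in this paragraph can be checked with a back-of-the-envelope calculation (the 2% base rate and the 90%/20% operating point are taken from the text):

```python
# Back-of-the-envelope check: warnings raised per correctly
# flagged onset at the 90% TPR / 20% FPR point with a 2% base rate.
base_rate = 0.02
tpr, fpr = 0.90, 0.20

true_pos = tpr * base_rate         # share of all cases: correct warnings
false_pos = fpr * (1 - base_rate)  # share of all cases: false warnings
warnings_per_hit = (true_pos + false_pos) / true_pos
print(round(warnings_per_hit, 1))  # prints 11.9, i.e. about 12 warnings
```

With onsets this rare, even a 20% false positive rate swamps the correct warnings, which is exactly the base-rate problem described above.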

Now look at the red dashed line in Figure 2. This line shows a forecasting model in which civil war incidence is forecast by civil war incidence. In other words, this model does not focus on the outbreak of civil war, but instead exploits the fact that a year of civil war is typically followed by another year of civil war. This is, at least viewed from the outside, a crude characterisation of current practice. For low false positive rates this crude model reaches an extremely high true positive rate – the likelihood that ongoing violence will be predicted is very high. This might be one reason why we fail to do actual prevention: it is tempting to put less effort into early conflict phases, where the risk might never be realised, and to give attention to a country only once violence has already started.
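
The crude reactive benchmark amounts to a one-line persistence rule: forecast war for next year wherever there is war this year. The incidence series below is made up purely for illustration.

```python
# Naive persistence baseline: predict civil war in year t+1
# wherever there is civil war in year t. The series is invented.
incidence = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]  # 1 = civil war that year

prediction = incidence[:-1]  # this year's status as next year's forecast
actual = incidence[1:]

hits = sum(p == 1 and a == 1 for p, a in zip(prediction, actual))
false_alarms = sum(p == 1 and a == 0 for p, a in zip(prediction, actual))
print(f"{hits} of {sum(actual)} war years flagged, "
      f"{false_alarms} false alarms")
```

Such a rule catches ongoing violence almost by construction, but it can never flag an onset before the first year of war, which is why it cannot support prevention.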

This may be cynical but, given the current state of the art, is it also rational? Figure 2 tells us that waiting until a civil war starts improves the ratio of about five interventions per prevention to about three. This is not an order-of-magnitude difference, and it can hardly justify such a late intervention, because prevention becomes much harder once conflict is ongoing. Of course, there could be other political reasons for postponing prevention, but we hope that the temptation to react to violence instead of preventing it can be countered by a more transparent discussion.

References

Besley, T and T Persson (2011), “The Logic of Political Violence”, The Quarterly Journal of Economics, 126(3).

Blattman, C and E Miguel (2010), “Civil War”, Journal of Economic Literature, 48(1).

Chadefaux, T (2014), “Early Warning Signals for War in the News”, Journal of Peace Research, 51(1): 5–18.

Michalopoulos, S and E Papaioannou (2016), “The Long-Run Effects of the Scramble for Africa”, American Economic Review, 106(7).

Mueller, H and C Rauh (2016), “Reading Between the Lines: Prediction of Political Violence Using Newspaper Text”, CEPR Discussion Paper No. 11516.

Rohner, D, M Thoenig and F Zilibotti (2013), “Seeds of Distrust: Conflict in Uganda”, Journal of Economic Growth, 18.

Ward, M D, N W Metternich, C L Dorff, M Gallop, F M Hollenbach, A Schultz, and S Weschle (2013), “Learning from the past and stepping into the future: Toward a new generation of conflict prediction”, International Studies Review, 15(4): 473–490.

Endnote

[1] We make two assumptions: that policymakers care about conflict (to us this seems evident), and that they know how to deal with risk (this is less evident).