Pippa Norris

Harvard and Sydney Universities

Founding Director, the Electoral Integrity Project

7 Jan 2017

Reprinted from the LSE USAPPBlog http://blogs.lse.ac.uk/usappblog/2017/01/08/its-even-worse-than-the-news-about-north-carolina-american-elections-rank-last-among-all-western-democracies/

On 22nd December, an op-ed published in the Raleigh News and Observer by Andrew Reynolds, Professor of Political Science at the University of North Carolina, Chapel Hill, went viral. The article commented on elections and politics in North Carolina, under a headline suggesting that “North Carolina is no longer classified as a democracy“.

There is no independent and reliable measure of how U.S. states rank in terms of liberal democracy. But, moving beyond hearsay and anecdote, new evidence is available assessing the quality of elections. For this, Professor Reynolds drew on data from the Electoral Integrity Project (EIP), an independent academic project based at Harvard and Sydney Universities, which I direct.

The project has conducted expert surveys of Perceptions of Electoral Integrity for the last five years to evaluate the quality of parliamentary and presidential elections around the world, including the 2012 and 2014 US elections.

This technique is commonly used for evaluating performance in the absence of directly observable indicators; it is similar to the methods employed for Transparency International’s Corruption Perceptions Index. The empirical evidence is gathered from rolling expert surveys gauging Perceptions of Electoral Integrity (PEI) globally (across 213 elections in 153 countries worldwide since 2012) and across US states (in 2014 and 2016). ‘Electoral integrity’ refers to international standards and global norms governing the appropriate conduct of elections during the pre-election stage, the campaign, polling day, and the election aftermath.

The EIP evidence indicated that compared with the performance of other U.S. states, in 2016 experts assessed North Carolina’s elections particularly poorly on district boundaries, the legal framework and voter registration processes.

Moreover, problems are not confined to one state, nor to the 2016 election. As I wrote in a Monkey Cage blog post on 29 March 2016, well before all the hullabaloo, the U.S. ranks 52nd out of 153 countries worldwide in the 2016 Perceptions of Electoral Integrity index, the worst performance among comparable Western democracies. As I said at the time: “By contrast, elections in many newer democracies are seen by experts to perform far better in the global comparison, such as in Lithuania (ranked 4th), Costa Rica (6th), and Slovenia (8th).”

The critique of the rankings

The op-ed by Professor Reynolds triggered dozens of news reprints, thousands of tweets, and a hailstorm of debate, some deploring politics in the Tar Heel state, others challenging the evidence behind the claims. Some questioned the PEI global ratings on the basis of so-called ‘sniff’ tests – a fancy way of saying that several cases in the dataset did not match readers’ prior assumptions.

No scientific data are ever perfect. Methods can always be improved; skepticism is the nature of science. We believe that the PEI has learnt from and continuously improved its methodology as it has developed over time – and we are committed to doing so further. Several concrete examples can be given:

Are cases comparable? Following Gary King’s suggested methods, EIP includes ‘anchoring vignettes’ designed to improve our understanding of the benchmarks used when experts make their evaluations of complex issues in diverse contexts.

Can we improve the methodology of expert surveys? Of course. To share awareness of best practices, methods, and standards in the production of expert surveys, and to learn how best to improve our work, last year EIP organized two international workshops in conjunction with IPSA in Poznan and APSA in Philadelphia, engaging leading scholars and practitioners from organizations such as V-Dem, Polity, Freedom House, the Carter Center, IFES, and UNDP.

Are issues corrected? As with any dataset, any issues are corrected in the bi-annual releases. Two cases included in PEI 3.0 (March 2015) – North Korea and Trinidad and Tobago – were dropped five months later in the subsequent dataset release because we had doubts about the technical responses in these two cases, as noted transparently in successive EIP publications.

Can we compare US state and cross-national data? Media commentary has focused obsessively upon the exact ranking of particular American states vis-a-vis other countries. But the Electoral Integrity Project publications have not made these comparisons. There are two separate datasets: the long series of EIP publications has compared nation-states with each other in the global survey (PEI-4.5), while the project has compared American states with each other in the sub-national survey (PEI-US-2016). These are separate enterprises, and whether it is appropriate to make such comparisons remains an open question.

What do we measure? EIP makes no claims about rating democracy – EIP measures electoral integrity, which is far from equivalent. Liberal democracies require effective elections – but also many other institutions which facilitate competition and participation. Similarly, electoral integrity requires that states have the capacity for effective governance, to prevent unintentional problems arising from maladministration and human error, as well as democratic principles safeguarding basic human rights and preventing the abuse of power.

Are EIP's methods transparent? The project has published the datasets, codebooks, and public reports for each successive release, and archived the older versions, so that the results are open to public scrutiny. We encourage analysts to use the datasets and to provide feedback in order to improve the process, including at the annual workshops, professional conferences, and outreach talks and events worldwide.

Are EIP's measures definitive? Absolutely not. We stand fully by the methods and evidence. But as with any other social and political indicators of complex phenomena, the data should obviously be compared with alternative sources of independent evidence, and treated as the starting point for any diagnosis, not the end point.

Social media commentary about problems in elections has too often distracted us with shiny baubles – partisan click-bait, fake news, and simplistic attacks on red herrings – mixing up these complex issues. Instead of falling for these sorts of distraction, anyone concerned about democracy should engage seriously in a dialogue about the problems which need to be addressed to strengthen elections at home and abroad, an issue at the heart of American values.

In particular, there has been no serious rebuttal of the evidence that US elections are seriously flawed in certain respects. When there are disputes about the validity of any evidence, rather than employing dubious ‘sniff tests’, social scientists usually compare the results from two or more independent sources of data which measure identical or equivalent concepts. The greater the agreement among independent studies, all other things being equal, the more confidence we should have in the evidence. Like fitting together the pieces of a jigsaw, triangulation is the name of the game.
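As a minimal sketch of what such triangulation means in practice, the short Python snippet below computes a Spearman rank correlation between two entirely hypothetical expert indices; the scores are made up for illustration and are not drawn from PEI or V-Dem. High rank agreement between independent measures of the same concept raises confidence in both.

```python
# Triangulation sketch: compare two independent (hypothetical) expert indices
# of electoral quality by asking how similarly they rank the same cases.
# Spearman's rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), valid when no ties.

def spearman(x, y):
    """Spearman rank correlation for two equal-length score lists without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for pos, i in enumerate(order, start=1):  # 1-based ranks
            r[i] = pos
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up 0-100 scores for five illustrative cases from two separate sources.
index_a = [86, 74, 62, 55, 40]
index_b = [80, 50, 65, 70, 45]

rho = spearman(index_a, index_b)
print(f"Spearman rank correlation: {rho:.2f}")  # prints 0.60: moderate agreement
```

A rho near 1 would mean the two sources rank the cases almost identically; values near 0 would suggest the measures capture different things and warrant closer inspection.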

Comparing independent sources of evidence

So are American elections actually as exceptionally bad as claimed? For evidence we can turn to the Varieties of Democracy (V-Dem) project based at the University of Gothenburg, an independent academic study involving a team of over 50 social scientists on six continents. The latest version of the dataset (V6.2) covers 350 indicators and 30 democracy indices in 173 countries annually from 1900 to 2012. The project has been widely acknowledged; last year, for example, it received one of the most prestigious dataset awards in American political science. What does the V-Dem evidence show about how the quality of American elections compares with contests around the rest of the world?

V-Dem gauges the quality of elections using dozens of indicators, such as voting rights, campaign media, the extent of any opposition boycotts and violence, and the capacity of electoral officials. For a summary measure, V-Dem asks its expert respondents: “Taking all aspects of the pre-election period, election day, and the post-election process into account, would you consider this national election to be free and fair?” Responses to this item, averaged for the period from 2000 to 2012, can be analyzed in 161 countries.

The concepts, instruments, time-periods, and methods used by V-Dem and EIP are not identical, by any means. The concept of ‘electoral integrity’ is not the same as whether elections are ‘free and fair’, and V-Dem seeks retrospective evaluations whereas EIP’s are contemporaneous. Nevertheless, the concepts are close enough to provide meaningful comparisons.