The Design Problems with Ranked Choice Voting

As noted on the Ranked Choice Voting website, voters have some important instructions to follow. Those instructions themselves point out some of the shortcomings of ranked choice voting.

As a survey designer, I never use so-called forced ranking questions.

This question type tries to measure the rank order importance of factors to some outcome. We might ask respondents:

What drove your decision to purchase product X? Please indicate the importance of each of the following 5 items by selecting 1 for the most important, 2 for the second most important, etc.

Or, in the election context: What are the most important issues in this election? Please indicate the order of importance by selecting 1 for the most important, etc.

Why don’t I use this question type? The forced ranking (rank choice) question type is prone to:

Respondent Annoyance. If the list is long, ranking every item is tedious, and the differences in preference after the first few are likely trivial. I recently saw a survey that asked respondents to rank-order 19 (sic) items! You can be certain that after the first 5 or so were ranked, respondents were just clicking buttons to get it done.

Respondent Error. Think about the Florida butterfly ballot from the 2000 election, depicted nearby. No matter how clear the instructions, some people will screw it up. The fact is, instructions on surveys or ballots are like the safety briefing at the start of a flight: no one pays attention to them.

Same here. On a paper ballot, some people will mark two 1st choices, or mark no 1st choice at all, perhaps as a way of saying they don’t really like any candidate. Such ballots would likely be declared invalid for that office. Even ranking the same candidate 1st, 2nd, and 3rd could invalidate the ballot. Is that fair?
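The kinds of errors described above are mechanical enough that they can be expressed as simple rules. The sketch below is a hypothetical illustration, not any state’s actual law: the `check_ballot` function and its rules are my own invention, showing how a tabulation system might flag the three errors just mentioned (no 1st choice, multiple 1st choices, and the same candidate ranked more than once).

```python
# Hypothetical sketch of ballot-validity rules for a ranked ballot.
# The rules here are illustrative only; real statutes vary by jurisdiction.

def check_ballot(rankings):
    """rankings maps a rank (1, 2, 3, ...) to the list of candidates
    the voter marked at that rank. Returns a list of problems found;
    an empty list means no problem was detected."""
    problems = []

    first = rankings.get(1, [])
    if len(first) == 0:
        # Voter left the 1st-choice column blank.
        problems.append("no 1st choice marked")
    elif len(first) > 1:
        # Voter marked two or more candidates as 1st choice.
        problems.append("overvote: two or more 1st choices")

    # Detect the same candidate marked at more than one rank.
    seen = {}
    for rank, candidates in rankings.items():
        for c in candidates:
            if c in seen:
                problems.append(f"{c} ranked at both {seen[c]} and {rank}")
            else:
                seen[c] = rank
    return problems

# A ballot that ranks the same candidate 1st, 2nd, and 3rd:
print(check_ballot({1: ["Smith"], 2: ["Smith"], 3: ["Smith"]}))
```

Even this toy version shows the dilemma: the code can flag the problem, but someone still has to decide whether such a ballot is discarded, partially counted, or "redeemed."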

Administrative Dilemma. Given that some ballots will be filled out incorrectly, and many will be, what does the election bureau do with those ballots? The State of Maine has an 11-slide PowerPoint deck to help educate voters; the last few slides show how the state has decided to handle incorrect ballots. Someone has to decide this, but at some point aren’t we back to looking for dimpled ballots and counting chads? Would we want an election decided by those “redeemed” ballots?

A friend of mine in Maine, which uses RCV, told me that he screwed up his ballot for the first time in his life.

Electronic voting systems may have safeguards that prevent these mistakes, but then we increase the annoyance factor. The error messages will lead some voters to simply give up and skip voting.

I suggest the annoyance and error issues will be particularly acute for older voters whose faculties have diminished with time. My dad was an electrical engineer with 25 patents to his name. I watched his logic skills deteriorate later in life. He would have had trouble with this system in his later years. Does this system discriminate against such voters?

And remember that the voting method has to be ADA compliant (Americans with Disabilities Act)!

Ranked Choice Usability History

You might think I’m raising a non-issue. I’m not. I have followed electronic survey tools since their inception in the late 1990s. The format for ranking questions has constantly been in flux as designers search for a way of presenting the question that avoids error and annoyance. Formats include a matrix of radio buttons, drop-down boxes, and drag-and-drop, among others. SurveyMonkey, I believe, has changed this question format three times in the past 5 years.

I have tried ranking questions in surveys and have had to throw out the question’s data because there was too much error in the responses, making the findings meaningless or, worse, distortive.