Over all, the results indicate that people do a pretty good job of predicting their own turnout. Yes, most of the people who say they’re “almost certain to vote” wind up voting. But not all. And yes, most of the people who say they’re “not very likely to vote” don’t turn out, but some do.

As a result, we don’t draw a hard line between “likely” and “unlikely” voters. Everyone, in our view, has some probability of voting.

One interesting thing that complicates this analysis is that poll respondents are much likelier to vote than nonrespondents, even after controlling for everything else. In other words, if we thought you had a 10 percent chance of voting before you picked up the phone, we instantly think you have a 35 percent chance of voting the moment you agree to take a political survey. That decision alone reveals that you’re really likelier to vote than your demographics and history of voting would suggest. After accounting for this phenomenon, we wind up giving a smaller bonus to unlikely voters who say they’re going to vote, simply because we don’t treat them as if they were all that unlikely to vote in the first place.
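To make the adjustment concrete, here is a minimal sketch of how a single “agreed to take the survey” bonus could move a 10 percent prior to 35 percent. The numbers come from the example above; the update rule (a constant shift on the log-odds scale) and the function names are illustrative assumptions, not the actual Upshot model.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Calibrate a single "answered the survey" bonus so that a 10% prior
# becomes a 35% posterior, as in the example above (an assumption).
RESPONSE_BONUS = logit(0.35) - logit(0.10)

def adjusted_turnout(prior):
    """Turnout probability after the respondent agrees to take the poll."""
    return inv_logit(logit(prior) + RESPONSE_BONUS)

print(round(adjusted_turnout(0.10), 2))  # 0.35 by construction
```

One property of this log-odds formulation is that the bonus matters most for people in the middle of the probability range and barely moves respondents who were already near 0 or 1, which matches the intuition that answering the phone tells you less about someone you were already sure would vote.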

Our analysis of validated turnout in prior Upshot/Siena polls indicates that our models do a much better job of predicting turnout than self-reported intention. If we think you will vote, and you say you won’t, you’re probably still going to vote. But there is still some value in this data: If we weren’t sure you were going to vote, we do care quite a bit about what you told us.

It is worth noting, however, that just because this was true in our prior polls doesn’t mean it will be true in the future. Take the August special election in Ohio’s 12th District. We ran a test there in late July to work out the kinks in the infrastructure for this project. We didn’t get enough respondents to complete a full poll, but for illustrative purposes, consider how this all would have played out:

Almost certain/already voted: Even

Self-report only: Balderson +1

Result: Balderson +1

What our estimate would have been: Balderson +4

Midterm model only: Balderson +6
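One way to see how an estimate of Balderson +4 could fall between the self-report number (+1) and the midterm model (+6) is as a weighted blend. The weights below are purely hypothetical; the article does not say how the two signals were actually combined.

```python
# Candidate margins from the Ohio 12 illustration (Balderson +N).
self_report_margin = 1.0    # self-report only
midterm_model_margin = 6.0  # midterm model only

# Hypothetical weight on the historical turnout model (an assumption).
weight_model = 0.6

blended = (weight_model * midterm_model_margin
           + (1 - weight_model) * self_report_margin)
print(f"Balderson +{blended:.0f}")  # Balderson +4 under these weights
```

Under these assumed weights the blend lands at +4; any real model would set the balance between the two signals empirically, based on how each has performed against validated turnout.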

Maybe it should be no surprise that self-reported turnout fared a little better in a special election than in regularly scheduled elections, where it’s presumably easier to model turnout using historical data.

Or maybe it’s a harbinger of what’s going to happen this November, and our turnout models won’t be as useful as they’ve been in the past.

Either way, we’ll show you all of these possibilities on our page. No matter what happens, we hope this is a transparent look into why polling works and why sometimes it does not.