But remember: It’s just one poll, and we talked to only 501 people. Each candidate’s total could easily be five points different if we polled everyone in the district. And having a small sample is only one possible source of error.

For outside observers, this is the “Bigfoot” election. We made 21,849 calls, and 501 people spoke to us.

This survey was conducted by The New York Times Upshot and Siena College.


It’s generally best to look at a single poll in the context of other polls.

Ms. Cockburn enjoys a lopsided fund-raising advantage, having raised about $2.4 million to Mr. Riggleman’s $910,000 in the most recent reporting period.

Mr. Riggleman has acknowledged an interest in Bigfoot — he is co-author of a book on the subject — but rejected the erotica part.

Ms. Cockburn has tried to tie her opponent to elements of the far right, including Corey Stewart, the Republican nominee for United States Senate in Virginia. And in one bizarre campaign moment, she accused Mr. Riggleman of being a fan of “Bigfoot erotica.”

This Republican-leaning, mostly rural district stretches from the Washington exurbs to the North Carolina border, and the unexpected retirement of the Republican incumbent leaves the seat open. It’s the home of Charlottesville, the college town that found itself in an unwanted spotlight last year after a right-wing rally turned violent.

Each dot shows one of the 21,849 calls we made.

If sampling error were the only type of error in a poll, we would expect candidates who trail by one point in a poll of 501 people to win about two out of every five races. But this probably understates the total error by a factor of two.
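That “two out of every five” figure can be roughly reproduced with a normal approximation, counting sampling error alone. This is a back-of-the-envelope sketch under simplifying assumptions (a near-even two-candidate race, p ≈ 0.5), not the Times/Siena methodology:

```python
import math

def trailing_win_prob(observed_lead_pts, n, p=0.5):
    """Chance the candidate trailing by `observed_lead_pts` is actually ahead,
    counting sampling error only -- a simplifying assumption, since real polls
    have other error sources, as noted above."""
    # Standard error of the two-candidate margin, in percentage points,
    # assuming a near-even race (p = 0.5).
    se_margin = 2 * math.sqrt(p * (1 - p) / n) * 100
    z = observed_lead_pts / se_margin
    # Normal CDF evaluated at -z, via the error function
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# A one-point trailer in a 501-person poll wins about 41% of the time:
print(round(trailing_win_prob(1, 501), 2))  # -> 0.41
```

That is about two in five. If total error is really double the sampling error, the trailer’s chances move even closer to a coin flip.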

One reason we’re doing these surveys live is so you can see the uncertainty for yourself.

As we reach more people, our poll will become more stable and the margin of sampling error will shrink. The changes in the timeline below reflect that sampling error, not real changes in the race.
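The shrinking margin follows the familiar square-root-of-n rule. A quick sketch of the conventional 95% formula shows the effect (the worst-case p = 0.5 is an assumption for illustration):

```python
import math

def sampling_moe(n, p=0.5):
    """Conventional 95% margin of sampling error for one candidate's share,
    in percentage points. p = 0.5 is the worst case."""
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# The margin shrinks with the square root of the sample size:
for n in (100, 250, 501, 1000):
    print(n, round(sampling_moe(n), 1))  # 9.8, 6.2, 4.4, 3.1 points
```

Going from 100 interviews to 501 cuts the margin by more than half; doubling again to 1,000 buys only about another point.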

Our turnout model

There’s a big question on top of the standard margin of error in a poll: Who is going to vote? It’s a particularly challenging question this year, since special elections have shown Democrats voting in large numbers.

To estimate the likely electorate, we combine what people say about how likely they are to vote with information about how often they have voted in the past. In previous races, this approach has been more accurate than simply taking people at their word. But there are many other ways to do it. Assumptions about who is going to vote may be particularly important in this race.

Our poll under different turnout scenarios (who will vote, est. turnout, our poll result):

The types of people who voted in 2014 (215k): Riggleman +4
Our estimate (247k): Cockburn +1
People whose voting history suggests they will vote, regardless of what they say (247k): Even
People who say they will vote, adjusted for past levels of truthfulness (269k): Cockburn +2
People who say they are almost certain to vote, and no one else (273k): Cockburn +14
The types of people who voted in 2016 (343k): Even
Every active registered voter (465k): Cockburn +2
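To make the idea of blending stated intention with vote history concrete, here is a toy likely-voter score. The 50/50 blend and the 0-to-1 scale are illustrative assumptions, not the Times/Siena model:

```python
def turnout_score(stated_likelihood, voted, eligible):
    """Toy likely-voter score: average a voter's stated likelihood of voting
    (0 to 1) with the share of recent elections they actually voted in.
    The 50/50 blend is an illustrative assumption, NOT the Times/Siena model."""
    history = voted / eligible if eligible else 0.0
    return 0.5 * stated_likelihood + 0.5 * history

# Someone "almost certain" to vote (0.9) who turned out in 1 of 4 recent elections:
print(round(turnout_score(0.9, 1, 4), 3))  # -> 0.575
```

The point of the blend is visible in the example: a voter’s enthusiastic self-report gets discounted by a thin turnout record, which is why this approach beats simply taking people at their word.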

The types of people we reached

Even if we got turnout exactly right, the margin of error wouldn’t capture all of the error in a poll. The simplest version assumes we have a perfect random sample of the voting population. We do not. People who respond to surveys are almost always too old, too white, too educated and too politically engaged to accurately represent everyone.

How successful we were in reaching different kinds of voters:

                Called   Interviewed   Success rate   Our responses   Goal
18 to 29         3,386            32       1 in 106              6%     9%
30 to 64        13,627           276        1 in 49             55%    55%
65 and older     4,829           193        1 in 25             39%    36%
Male             9,685           215        1 in 45             43%    46%
Female          12,164           286        1 in 43             57%    54%
White           16,336           380        1 in 43             76%    76%
Nonwhite         4,148            86        1 in 48             17%    18%
Cell            11,218           219        1 in 51             44%      —
Landline        10,631           282        1 in 38             56%      —

Pollsters compensate by giving more weight to respondents from under-represented groups. Here, we’re weighting by age, primary vote, gender, likelihood of voting, race, education and region, mainly using data from voting records files compiled by L2, a nonpartisan voter file vendor. But weighting works only if you weight by the right categories and you know what the composition of the electorate will be. In 2016, many pollsters didn’t weight by education and overestimated Hillary Clinton’s standing as a result. Here are other common ways to weight a poll:

Our poll under different weighting schemes (our poll result):

Don’t weight by education, like many polls in 2016: Cockburn +3
Our estimate: Cockburn +1
Don’t weight by primary vote, like most public polls: Even
Weight using census data instead of voting records, like most public polls: Even
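To show how the weighting mechanism works, here is a minimal sketch of post-stratification on a single variable, age group. The sample shares and targets below are hypothetical stand-ins echoing the age gap in the table above; real polls like this one rake over several variables at once:

```python
from collections import Counter

# Hypothetical raw sample of 100 respondents: 6% young vs. a 9% goal,
# mirroring the kind of gap shown above. Targets are illustrative only,
# not the poll's actual weighting cells.
respondents = ["65 and older"] * 39 + ["30 to 64"] * 55 + ["18 to 29"] * 6
targets = {"18 to 29": 0.09, "30 to 64": 0.55, "65 and older": 0.36}

counts = Counter(respondents)
n = len(respondents)
# Weight = target share / observed share, so under-represented groups
# count more and over-represented groups count less.
weights = {group: targets[group] / (counts[group] / n) for group in counts}

for group, w in weights.items():
    print(group, round(w, 2))
```

Each young respondent ends up counting 1.5 times, each respondent 65 and older slightly less than once. The pitfall the article describes follows directly: if a category that matters (like education in 2016) is left out of `targets`, no amount of weighting on the other categories corrects for it.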