But remember: It’s just one poll, and we talked to only 500 people. Each candidate’s total could easily be five points different if we polled everyone in the district. And having a small sample is only one possible source of error.

This survey was conducted by The New York Times Upshot and Siena College.


It’s generally best to look at a single poll in the context of other polls.

Ms. Manning is the first woman to head the Jewish Federations of North America. A first-time candidate at age 61, she was inspired to run after witnessing Congress try to repeal protections for pre-existing conditions. She has outraised her opponent.

Ms. Manning has donated money to Nancy Pelosi in the past but recently said she wouldn’t support her for speaker. “We are at a crisis point in our country, and both parties are to blame,” she said.

A recent poll found health care was the top issue in the district. Mr. Budd voted to repeal the Affordable Care Act, and he supported the G.O.P. tax overhaul.

Mr. Budd, a member of the conservative House Freedom Caucus, owns a gun store and a shooting range. Ms. Manning is a chief fund-raiser for a new Greensboro performing arts center.

If sampling error were the only type of error in a poll, we would expect candidates who trail by six points in a poll of 500 people to win about one out of every 10 races. But this probably understates the total error by a factor of two.
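The “one in 10” figure can be checked with a normal approximation to the sampling distribution of the lead. This is a rough sketch, not the pollsters’ actual computation: it assumes the standard error of the lead (a difference of two shares) is about twice that of a single share, and it models “total error” as simply inflating that standard error.

```python
from math import erf, sqrt

def win_probability(lead_pts, n, error_factor=1.0):
    """Rough chance the trailing candidate actually leads, given a poll lead.

    Normal approximation. The standard error of the *lead* (difference of
    two shares) is roughly twice that of a single share; error_factor lets
    you inflate it to account for non-sampling error.
    """
    p = 0.5  # worst-case share for the variance
    se_pts = 2 * sqrt(p * (1 - p) / n) * 100 * error_factor
    z = -lead_pts / se_pts
    return 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF at z

print(round(win_probability(6, 500), 2))       # sampling error alone: ~0.09
print(round(win_probability(6, 500, 2.0), 2))  # total error doubled: ~0.25
```

With sampling error alone, a six-point deficit in a 500-person poll flips about 9 percent of the time; doubling the error pushes that to roughly one in four.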

One reason we’re doing these surveys live is so you can see the uncertainty for yourself.

As we reach more people, our poll will become more stable and the margin of sampling error will shrink. The changes in the timeline below reflect that sampling error, not real changes in the race.
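The shrinking follows the familiar square-root law: quadrupling the sample only halves the margin. A quick sketch of the standard 95 percent margin of sampling error for a single candidate’s share:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of sampling error, in percentage points, for one share
    return z * sqrt(p * (1 - p) / n) * 100

for n in (100, 250, 500, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1f} points")
```

At 500 respondents the margin on one candidate’s share is about 4.4 points, which is why each candidate’s total “could easily be five points different”; the margin on the gap between two candidates is roughly double that.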

Our turnout model

There’s a big question on top of the standard margin of error in a poll: Who is going to vote? It’s a particularly challenging question this year, since special elections have shown Democrats voting in large numbers.

To estimate the likely electorate, we combine what people say about how likely they are to vote with information about how often they have voted in the past. In previous races, this approach has been more accurate than simply taking people at their word. But there are many other ways to do it.

Our poll under different turnout scenarios (who will vote?, estimated turnout, result):

Our estimate: 217k, Budd +6
People whose voting history suggests they will vote, regardless of what they say: 220k, Budd +4
The types of people who voted in 2014: 230k, Budd +7
People who say they are almost certain to vote, and no one else: 233k, Budd +13
People who say they will vote, adjusted for past levels of truthfulness: 233k, Budd +5
The types of people who voted in 2016: 324k, Budd +8
Every active registered voter: 446k, Budd +4

Just because one candidate leads in all of these different turnout scenarios doesn’t mean much by itself. They don’t represent the full range of possible turnout scenarios, let alone the full range of possible election results.
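The blending idea can be illustrated with a toy likely-voter score. The calibration constants and the 50/50 blend below are made-up values for illustration, not the model the poll actually uses:

```python
def turnout_score(says_certain, past_votes, past_elections=4):
    """Toy likely-voter score: blend stated intent with vote history.

    The 0.9/0.4 calibration values and the even blend are illustrative
    assumptions, not the values used in any real turnout model.
    """
    history_rate = past_votes / past_elections
    stated = 0.9 if says_certain else 0.4
    return 0.5 * stated + 0.5 * history_rate

# Someone who says they're certain but voted in only 1 of the last 4 elections:
print(turnout_score(True, 1))   # 0.575
# Someone hesitant who has voted every time:
print(turnout_score(False, 4))  # 0.7
```

The point of the blend is visible in the two examples: stated enthusiasm and actual history pull the score in different directions, so neither alone decides who counts as a likely voter.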

The types of people we reached

Even if we got turnout exactly right, the margin of error wouldn’t capture all of the error in a poll. The simplest version assumes we have a perfect random sample of the voting population. We do not. People who respond to surveys are almost always too old, too white, too educated and too politically engaged to accurately represent everyone.

How successful we were in reaching different kinds of voters:

Group          Called    Interviewed   Success rate   Our responses   Goal
18 to 29        1,471         21          1 in 70           4%          8%
30 to 64       13,266        299          1 in 44          60%         59%
65 and older    5,106        180          1 in 28          36%         34%
Male            8,434        220          1 in 38          44%         45%
Female         11,409        280          1 in 41          56%         55%
White          14,013        359          1 in 39          72%         71%
Nonwhite        5,259        126          1 in 42          25%         26%
Cell           13,684        290          1 in 47          58%          —
Landline        6,159        210          1 in 29          42%          —

Pollsters compensate by giving more weight to respondents from under-represented groups. Here, we’re weighting by age, party registration, gender, likelihood of voting, race, education and region, mainly using data from voter files compiled by L2, a nonpartisan voter file vendor. But weighting works only if you weight by the right categories and you know what the composition of the electorate will be. In 2016, many pollsters didn’t weight by education and overestimated Hillary Clinton’s standing as a result. Here are other common ways to weight a poll.

Our poll under different weighting schemes:

Don’t weight by education, like many polls in 2016: Budd +3
Weight using census data instead of voting records, like most public polls: Budd +5
Don’t weight by party registration, like most public polls: Budd +5
Our estimate: Budd +6

Just because one candidate leads in all of these different weighting scenarios doesn’t mean much by itself. They don’t represent the full range of possible weighting scenarios, let alone the full range of possible election results.
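In its simplest form, weighting multiplies each respondent by the target share of their group divided by the sample share. Real polls rake across several variables jointly; this one-variable sketch uses only the age shares and goals reported above:

```python
# One-variable post-stratification sketch (real weighting rakes over
# age, party, gender, race, education and region jointly).
sample_share = {"18-29": 0.04, "30-64": 0.60, "65+": 0.36}  # who answered
target_share = {"18-29": 0.08, "30-64": 0.59, "65+": 0.34}  # the goal

weights = {g: target_share[g] / sample_share[g] for g in sample_share}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

Because young respondents came in at half their target share, each one counts double, while over-represented older respondents are weighted slightly below 1.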