(CNN) Before the 2018 election, I said the best way to push back against those who call polls "fake news" was for the polls to predict the election results.

It turns out public polling passed this test with flying colors. Nonpartisan polls taken within three weeks of the 2018 election were far more accurate than the average poll since 1998.

The biggest test of 2018 was the battle for control of the House. There are two different ways to poll House races.

The most familiar is the generic congressional ballot, which usually asks voters to say whether they will vote for the Democratic or Republican candidates in their districts. From 1998 to 2016, the difference between the average generic ballot poll and the House national vote was 3.8 points. This year (with some votes still to be counted) it looks like it's going to be only 2.8 points, which is a full point closer to the result than the average.

The other way to poll House races is by individual district. Those polls fared just as well. Nonpartisan House polls have historically missed the mark by an average of 5.9 points. This year it was just 4.9 points. Again, that means the average district poll was a full point closer to the result than usual.

That increase in accuracy was driven in large part by the Siena College/New York Times polls, whose surveys made up the bulk of district-level polling and had an average absolute error of just about 3 points. That's nearly 3 points better than average, which is off-the-charts good.

Statewide polling also had a strong year, although it should be noted that Senate and governors' polling did pick fewer winners than usual.

The average poll in the Senate was off by only 4.2 points. The average Senate poll historically has been off by 5.2 points, which means this year's polls were a point better than average. Likewise, the average governor's poll had an error rate of 4.4 points. That's 0.7 point more accurate than the average governor's poll since 1998.

But there were also a number of cases where the polling had one candidate winning but the other candidate won the election. This year nonpartisan Senate polls called the correct winner 77% of the time, compared with 84% historically. Governor polls saw a nearly identical 76% of races called correctly, compared with 84% historically.

Why the difference? Much of it has to do with the unusually large number of races forecast to be close this year. Over 20% of pre-election polls for the Senate and governor's races had margins of 2 points or less. Historically, that figure has been under 20%.

You can see that when you look at the averages in some of the individual states. The average poll in the Florida governor's race was off by 4.4 points, which isn't any larger than the average for gubernatorial polls this year. It just so happened that the race was tight and the polling error was enough to tilt the winner from Democrat Andrew Gillum in the polling to Republican Ron DeSantis in the results. The same thing occurred in the Senate race, where the average poll was better than average (just 3 points off), but the race was close enough for a polling error to flip it from Democrat Bill Nelson to Republican Rick Scott.

The fact that there were so many close Senate races was the big reason why my final Senate forecast had Republicans winning more races than just those where they led in the polling average. History suggested Democrats wouldn't hold all of their small leads.

This wasn't a big problem on the House side, where the leader in 83% of district polling ended up winning the race. That's better than the 80% historical average.

Now, you might be wondering if the polling misses this year were more likely to benefit Democrats or Republicans. Obviously, even if polling errors were smaller than average, it would be bad if one side benefited from them.

There were some states where the polls were biased. The polls were once again too favorable to Democratic candidates in Midwestern states like Indiana, Michigan and Ohio, though not in Pennsylvania or Wisconsin. And as in 2016, the polls were too favorable to Democrats in Florida and Republicans in Nevada. Such repeated patterns of bias in the polling are somewhat worrying.

Unlike in 2016, the errors this year tended to cancel each other out, as one would hope they would.

The average governor and Senate polls were about a point more favorable to the Democrats than the result. The average generic congressional ballot and House district polls were less than a point more favorable to Republicans than the actual result.

Overall, the average poll was about 0.5 point more favorable to Democrats than the final result. In only one other cycle since 1998 did the polling have less of a bias toward either the Democrats or the Republicans. In other words, 2018 polls were solid and largely free of bias compared with the average cycle.
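The difference between the two measures used above — how far off the typical poll was (absolute error) versus whether misses leaned toward one party (signed bias) — can be illustrated with a short sketch. The numbers here are made up for illustration; they are not the actual 2018 polling errors.

```python
# Illustrative sketch of absolute error vs. signed bias.
# Convention: a positive error means the poll was more favorable
# to the Democrat than the final result; negative means more
# favorable to the Republican. These values are hypothetical.
poll_errors = [3.0, -2.5, 4.0, -3.5, 1.0, -2.0]

# Average absolute error: the typical size of a miss,
# regardless of which party it favored.
avg_absolute_error = sum(abs(e) for e in poll_errors) / len(poll_errors)

# Average signed bias: misses in opposite directions cancel out,
# so this can be near zero even when individual polls were well off.
avg_signed_bias = sum(poll_errors) / len(poll_errors)

print(round(avg_absolute_error, 2))  # 2.67 — polls typically missed by a few points
print(round(avg_signed_bias, 2))     # 0.0 — but the misses favored neither party
```

A year like 2018 looks roughly like this sketch: a nonzero average miss, but one that mostly cancels out across races, whereas a year like 2016 had errors that leaned consistently in one direction.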

I should point out that the accuracy of polling this year does not mean we're undergoing some sort of polling renaissance, where the polls will be awesome from now on. In 2020, the polls may be less accurate than they were this year.

The predictiveness of polling in 2018, though, does reaffirm that polling isn't becoming less accurate. As long as we recognize that polls are tools and aren't going to be perfect, they remain the best way outside of an election to understand how the public is feeling about the country.