Those who follow my writing know that I am not a fan of weighting polls to a predetermined party identification (ID). The reason is that answers to party ID questions can be skewed by question ordering and the political environment. Many Republicans, for instance, said they were independents after the Watergate scandal. Respondents may also be more likely to call themselves independents rather than Republicans if they decide to cross party lines in an election.

Some Republicans have been up in arms this year over what they perceive as polls with too many Democrats. The charge is not new. In 2004, many Democrats complained of a bias in the other direction: pollsters were finding that more voters identified as Republicans than usual, so Democrats thought the polls had too many Republicans. But the pollsters were right: more voters in the exit polls called themselves Republicans than in prior years, and George W Bush did win.

The truth is that while some pollsters are unreliable, most know what they are doing. It's also important to remember that most media companies use bipartisan polling teams. Fox News, for example, has both a Democratic and a Republican pollster on its team. They found President Obama up by 5 percentage points. Are they biased?

NBC News/Wall Street Journal has given Obama the advantage in every one of its polls. Did you know that one of its pollsters is Republican Bill McInturff, whose partner is Mitt Romney's pollster Neil Newhouse?

Still, the party identification argument would be more sustainable if party ID made a big difference across pollsters: that is, if pollsters whose samples contained more Democrats produced better results for Obama, while those with more Republicans were friendlier to the GOP. To that effect, an interesting graph was published by the Twitter user "Numbers Muncher".

The graph is supposed to show that the more survey respondents identify as Democrats, the bigger Obama's lead. Your eye is probably drawn to the gap between Rasmussen (at bottom left) and every other pollster. Join the dots, and you would seem to have found a fairly straightforward relationship: more Democratic-identified respondents, a more pro-Obama poll result.

The percentage-point difference between Democrats and Republicans in the sample can explain about 45% of the variation in Obama's lead across these polls. That's a solid, though not overwhelming, relationship. It suggests that the number of Democrats in a sample can help predict whether a poll will be more Democratic-leaning than a poll done by another outlet.

The problem is that the relationship is being driven by one datapoint. Take Rasmussen out of the chart, and you get this graph:

All of a sudden, the relationship that seemingly was there no longer is. The explanatory power of party ID, therefore, is essentially zero. The pollster with the most favorable Romney numbers gives Democrats a 10-percentage-point party ID advantage. If anything, you might try to draw a U-shape to connect the dots.

A real statistical relationship should not depend on a single datapoint. Adding the newly released Monmouth University poll further confirms the lack of a relationship.
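The effect of a single high-leverage point is easy to demonstrate. Here is a minimal Python sketch using invented numbers (these are not the actual poll figures from the chart): a loose cloud of polls with essentially no relationship between party ID gap and Obama's lead, plus one Rasmussen-like outlier at the bottom left that manufactures a strong correlation on its own.

```python
# Illustrative only: the data below are made up to mimic the shape of the
# chart (a flat cloud plus one bottom-left outlier), not real poll numbers.

def pearson_r(xs, ys):
    """Pearson correlation coefficient; r**2 is the share of variation explained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# x = Democratic minus Republican party ID in the sample (points)
# y = Obama's lead in that poll (points)
cloud_x = [6, 7, 8, 9, 10, 8, 7, 9]
cloud_y = [4, 3, 5, 4, 3, 5, 4, 4]

# One Rasmussen-like outlier: few Democrats in the sample, Romney ahead
outlier_x, outlier_y = 1, -1

r_with = pearson_r(cloud_x + [outlier_x], cloud_y + [outlier_y])
r_without = pearson_r(cloud_x, cloud_y)

print(f"R^2 with the outlier:    {r_with ** 2:.2f}")   # substantial
print(f"R^2 without the outlier: {r_without ** 2:.2f}") # near zero
```

With the outlier included, party ID "explains" a large share of the variation in Obama's lead; drop that one point and the explanatory power collapses to roughly zero, which is exactly the pattern in the two graphs above.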

The truth is that if you dig deep enough, you can find many causes for "house effects" (that is, differences between pollsters). This year, Democrats and some analysts have complained about the racial composition of polling samples; I don't buy those racial arguments for the most part either. I only ask that individual pollsters keep their own data consistent. A pollster shouldn't have a 35% Democratic-identified sample one time and a 25% sample the next, except when the electorate itself has changed, such as increased Democratic enthusiasm after the Democratic National Convention.

Now, none of this is meant to say President Obama will definitely win (more on that later in the week), but he is winning right now. That may be welcome news to some, but hard for others to accept – and I've seen that. Take it from someone who was among the first to project that the Republicans would take over the House in 2010 and would hold it in 2012; not everyone wanted to hear that either.

I'm interested only in projecting the winner of this presidential election objectively. If there are those who prefer to cherry-pick data to get a preferred outcome, that's their choice. But they're setting themselves up for a fall.