It’s been a long year here at RealClearPolitics. Needless to say, readers’ interest in this election is especially piqued, and passions are high. I’m pretty sure that every polling organization this cycle has been the subject of at least one angry email “unskewing” or even dismissing its findings.

But I can say with confidence that no pollster has engendered as much controversy, or at times outright anger, as the Los Angeles Times’ operation, which has tended to show Donald Trump leading Hillary Clinton. This has happened even as other polls show Clinton in the lead, at times by dramatic margins.

It’s a little bit odd, to tell the truth. The Times poll is a single survey, generating a two-way result in a year when the four-way average is probably the most relevant. It doesn’t help that the traditional trackers have basically pulled out of the tracking business, which means the Times poll always has fresh data, and always has the first data available after every event – sometimes the only data available. Nor does it help that it is the only poll showing Trump ahead, which inflames liberals and #NeverTrump conservatives. Let’s just say the people sending angry emails and tweets about the poll were not doing so this summer, when Reuters/Ipsos showed Clinton up by double digits even as the RealClearPolitics Average showed Trump in a closer race.

Regardless, you can read a defense of the poll’s methodology here. You should also read this New York Times piece (published as this article was in production); its title is a bit overstated, but it does a nice job of illustrating how the sometimes-arbitrary decisions of pollsters can have substantial effects on their findings. While you are at it, check out this piece, which makes a similar point.

My point here is not to debate methodology; it is to answer this question: “If their methodology is so different, why do you pay any attention to them?” The answer is twofold. First, truth is not decided by committee. That is to say, the fact that the L.A. Times pollsters weight their poll in a different manner than other pollsters do doesn’t make them wrong. As noted in the above Upshot article, all pollsters have their proprietary ways of weighting data, and the fact that they do so in roughly the same way can at times be a bug rather than a feature.

But second, and more importantly, we’ve heard all of this before. The Times poll’s approach was that of the RAND poll in 2012, and I was concerned about it then. Those concerns were straightforward, and they echo some of the more thoughtful critiques I receive today. First, by weighting your panel to the previous presidential election, you risk skewing (for lack of a better term) the results, as people forget how they voted over time.

Second, by drawing repeatedly from the same group of respondents, you risk creating what I called “Heisenberg effects.” Think of it this way: If you were to perform a single poll of Americans regarding the “best” European soccer clubs, the results would probably be fairly random. They would likely reflect the most famous cities in Europe, or perhaps a few especially well-known clubs. The findings would probably be representative of America as a whole, which has a relatively low level of knowledge about European soccer.

But over time, if you keep asking people this question, they become curious. “What are the winningest clubs in Europe?” they may ask, and then perform a Google search. Perhaps they begin watching games so that they can answer the question more thoughtfully. The point is, by putting people in the experiment, you risk altering the experiment.

Finally, in 2012, much as in 2016, the results looked like a pretty significant outlier:

This chart shows three things. The top line, in blue, shows the results for the RAND poll from 52 days before the election through Election Day. The purple line shows the RCP poll average over the same period, while the horizontal line shows the eventual result. As you can see, the RAND poll produced some pretty freakish results, popping out numbers as much as six points at variance with the RCP Average.

In the end, though, the RAND poll basically got it right. The national polls (though not so much the state polls) were off in 2012. During the closing month of the campaign, they showed, on average, a 0.3-point Romney lead. The RAND poll, by contrast, showed a 3.8-point Obama lead – which turned out to be almost exactly correct.

Does that mean the Times poll will be correct this year? Absolutely not. We should treat it as one poll among many, and should note its outlier-ish tendencies. It may be worth watching for trend lines. We might also note that this cycle, it runs contrary to both the national and the state polls, and tends to deviate from the RCP Average by an even larger margin.

At the same time, though, we should recall that almost all of the objections lodged against the poll could have been lodged against it in 2012. Many were. The poll may well be flat-out wrong in 2016, but its history cautions heavily against dismissing it outright.

Update: Researchers from RAND wish to clarify that they continue to operate a panel survey and that the current L.A. Times poll is not affiliated with that panel. In addition, they suggest that current Times researchers have tweaked the methodology in a way that could over-weight certain under-represented sub-populations in this cycle.