
12 October 2012

Latest poll averages: Tester tied, Bullock ahead, Gillan behind

Updated 16 October 2012 to include the 14 October Rasmussen numbers: Tester and Rehberg tied at 48 percent each, Libertarian Dan Cox at 3 percent, and 2 percent undecided.

A few words of caution. There is a general belief, probably based on Nate Silver’s post-election analysis of poll accuracy in 2010, that the Rasmussen Poll is biased in the Republican direction while PPP’s polls are biased toward Democrats. In this context, bias does not mean prejudice or a deliberate weighting of the poll in a particular direction. It simply means that the methodology produces a consistent and measurable effect. All pollsters try to eliminate bias, so methods get tweaked after each election in an attempt to improve accuracy. We won’t know until after this election whether Rasmussen or any other poll still has the bias it had two years ago. At this point, we should accept the poll at face value and be mindful that 500 likely voters is a mighty small sample with wide error bars.
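For readers who want the arithmetic behind that caution, here is a rough sketch, mine rather than any pollster’s, of the standard 95 percent margin-of-error formula for a simple random sample. Real polls weight their samples, so published margins will differ somewhat, but the order of magnitude is the point.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a simple random sample of size n,
    using the worst case p = 0.5 and the usual z = 1.96."""
    return z * sqrt(p * (1 - p) / n)

# A 500-voter sample: roughly plus or minus 4.4 percentage points.
print(f"n=500: +/- {margin_of_error(500) * 100:.1f} points")
```

With an error bar of about 4.4 points in each direction, a 48–48 tie and a 52–44 lead are statistically hard to tell apart in a single 500-voter poll.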

Updated. The averages of polls (graphs below) released since Labor Day contain sobering messages for three Montana Democrats.

The results of two polls — Public Policy Polling (10 October; 737 sampled; PDF), and Montana State University (30 September; 477 sampled; PDF Day 1, PDF Day 2) — were released this week. PPP’s findings were unremarkable, and consistent with the poll’s 19 September results. MSU’s poll found Rehberg, Hill, and Daines leading, and reported (a) very high percentages of undecideds, and (b) low Libertarian percentages, compared to the other polls (PPP, Mason-Dixon, and Global Strategies).

MSU’s high undecided and low Libertarian numbers raise caution flags. I have no doubt that the poll’s authors reported exactly what they found. It’s possible the numbers are outliers, flukes. It’s also possible that there were problems with the poll’s design or administration.

I did not weight the averages for sample size, nor did I attempt to calculate the margin of error for the combined samples. My back-of-the-envelope calculations suggest a high probability that the samples were independent of each other, with very limited if any inadvertent cross-sampling; but each poll probably used different methods to weight demographic characteristics, so the samples should not be combined. (Rather than going into the math, I’ll simply note that quadrupling the size of the sample halves the margin of error.)
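To illustrate that parenthetical point with a quick sketch of my own (same simple-random-sample formula as above, not the pollsters’ actual weighting): because the margin of error scales as one over the square root of the sample size, a sample four times as large has half the error.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    # 95 percent margin of error for a simple random sample of size n.
    return z * sqrt(p * (1 - p) / n)

# Quadrupling the sample from 500 to 2,000 cuts the error roughly in half.
for n in (500, 2000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
# Prints about 4.4 points for n=500 and 2.2 points for n=2000.
```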