Reading UK opinion polls
Having just put something on this gemlog about UK elections, I should probably reiterate, if only for my own benefit, the rules of thumb for dealing with British polls. In no particular order:
- they're a snapshot, not a prediction
- they come with a confidence interval: 19 times out of 20, the true figure will fall within that interval, which is typically about ±3 percentage points and is usually omitted from the headline figure
- a party's level of support is more informative than the gap between two parties' levels of support (the gap has a wider confidence interval than either individual figure)
- the pollsters have a financial incentive to be as accurate as possible (some political opinion polls are really advertising for other research services)
- systematic errors may affect different polling firms differently, or in the same way (one example: misjudging how likely youngish Labour supporters were to turn out and vote, in 2015 and 2017)
- systematic errors tend not to affect the direction of changes in opinion; seldom do we see consistent changes in support heading in opposite directions with different pollsters
- different pollsters generally have different methodologies, so averaging polls from different pollsters doesn't make much sense
- the average of polls that matters is a time-weighted moving average of polls from the same pollster; if the same firm shows a party doing a bit better three times in a row, that is much less likely to be statistical noise
- conventional polls report average support, weighted by estimated turnout; this translates only roughly into the number of seats a party will win; the distributional efficiency of a party's support (how many seats it wins per vote) is a key metric, and parties differ widely on it
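The arithmetic behind the "about 3%" interval, and why the gap between two parties is noisier than either party's own figure, can be sketched like this (a minimal illustration, assuming a typical sample of 1,000 respondents and a 95% confidence level; the function names are mine):

```python
import math

def margin_of_error(share, n=1000, z=1.96):
    # 95% margin of error for one party's share (share as a fraction, e.g. 0.40)
    return z * math.sqrt(share * (1 - share) / n)

def lead_margin_of_error(share_a, share_b, n=1000, z=1.96):
    # 95% margin of error for the lead (share_a - share_b).
    # The two shares come from the same sample and are negatively
    # correlated, so the variance of the difference gains a covariance term:
    # var(a - b) = [a(1-a) + b(1-b) + 2ab] / n
    var = (share_a * (1 - share_a)
           + share_b * (1 - share_b)
           + 2 * share_a * share_b) / n
    return z * math.sqrt(var)

# A party on 40% in a poll of 1,000 respondents: roughly ±3 points
moe = margin_of_error(0.40)
# The lead of a 40% party over a 35% party is noticeably less certain
lead_moe = lead_margin_of_error(0.40, 0.35)
```

With these assumed numbers the single-share margin comes out just over 3 points, while the margin on the 5-point lead is substantially wider, which is the point of the rule above.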
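The same-pollster moving average above can be sketched as an exponentially time-weighted mean, where older polls count for progressively less (the half-life and the poll figures here are invented for illustration):

```python
from datetime import date

def time_weighted_average(polls, as_of, half_life_days=14):
    # polls: list of (date, share) pairs from the SAME pollster.
    # Each poll's weight halves every `half_life_days` days of age.
    weighted = [(0.5 ** ((as_of - d).days / half_life_days), share)
                for d, share in polls]
    total_weight = sum(w for w, _ in weighted)
    return sum(w * share for w, share in weighted) / total_weight

# Hypothetical figures: the same firm has a party a shade higher
# three times running, so the recent readings dominate the average.
polls = [(date(2024, 5, 1), 0.30),
         (date(2024, 5, 8), 0.31),
         (date(2024, 5, 15), 0.32)]
avg = time_weighted_average(polls, as_of=date(2024, 5, 15))
```

Because the newest poll carries the most weight, the average sits above the simple mean of the three readings, reflecting the trend rather than smearing it away.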