The Perils of Polls and Other Lessons in Amateur Punditry
February 14th, 2011
Joan Bryden of the Canadian Press has written two of the most important stories in political journalism recently, all the more significant for daring to out the dirty little secret of the racket that's been developing the past few years. [I'm quoting at length, but there's still more of value to read in the full-length stories themselves.]
There's broad consensus among pollsters that proliferating political polls suffer from a combination of methodological problems, commercial pressures and an unhealthy relationship with the media.
Start with the methodological morass.
"The dirty little secret of the polling business . . . is that our ability to yield results accurately from samples that reflect the total population has probably never been worse in the 30 to 35 years that the discipline has been active in Canada," says veteran pollster Allan Gregg, chairman of Harris-Decima which provides political polling for The Canadian Press.
For a poll to be considered an accurate random sample of the population, everyone must have an equal chance to participate in it. Telephone surveys used to provide that but, with more and more people giving up land lines for cell phones, screening their calls or just hanging up, response rates have plummeted to as little as 15 per cent.
Gregg says that means phone polls are skewing disproportionately towards the elderly, less educated and rural Canadians. Pollsters will weight their samples to compensate but that inevitably means "messing around with random probability theory" on which the entire discipline is based.
So why do pollsters continue to trumpet their imperfect data to the media?
Money. Or more precisely, the lack of it.
When Gregg started polling in the 1970s, there were only a handful of public opinion research companies. Polls were expensive so media outlets bought them judiciously.
Now, Gregg laments, almost anyone can profess to be a pollster, with little or no methodological training. There is so much competition that political polls are given free to the media, in hopes the attendant publicity will boost business.
Turcotte says political polls for the media are "not research anymore" so much as marketing and promotional tools. Because they're not paid, pollsters don't put much care into the quality of the product, often throwing a couple of questions about party preference into the middle of an omnibus survey on other subjects which could taint results.
And there's no way to hold pollsters accountable for producing shoddy results since, until there's an actual election, there's no way to gauge their accuracy.
"I believe the quality overall has been driven to unacceptably low levels by the fact that there's this competitive auction to the bottom, with most of this stuff being paid for by insufficient or no resources by the media," concurs Graves.
"You know what? You get what you pay for."
The problem is exacerbated by what Gregg calls an "unholy alliance" with the media. Reporters have "an inherent bias in creating news out of what is methodologically not news." And pollsters have little interest in taming the media's penchant for hype because they won't get quoted repeatedly saying their data shows no statistically significant change.
"In fact, they do the exact opposite. They will give quotes, chapter and verse, and basically reverse and eat themselves the next week," says Gregg.
"You just say, 'Oh geez, the gender gap is gone' (one week) and then, 'Oops, sorry, it's back' (the next week). It's unconscionable."
Gregg, who rose to prominence as a Progressive Conservative party pollster, recalls the initial "giddy" feeling of being treated like a media celebrity. Now, he feels partly responsible for creating a bunch of "mini-Frankensteins."
"You've got this situation where the polling profession has sort of fallen in love with the sound of its own voice and says things, quite frankly, that the discipline can not support."
But it's not all the pollsters' fault. Turcotte says journalists used to be more knowledgeable about methodological limits and more cautious about reporting results. Now, they routinely misconstrue data and ignore margins of error. [emphasis added]
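The weighting Gregg describes can be made concrete with a minimal sketch of post-stratification weighting. All the age bands and proportions below are invented for illustration; real pollsters weight on several variables at once (age, region, gender, education).

```python
# Toy illustration of post-stratification weighting (all numbers invented).
# If a phone sample over-represents older respondents, each respondent is
# weighted by (population share) / (sample share) for their demographic cell.

population_share = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}  # census-style targets
sample_share     = {"18-34": 0.15, "35-54": 0.33, "55+": 0.52}  # skewed phone sample

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

for cell, w in sorted(weights.items()):
    print(f"{cell}: weight {w:.2f}")
```

Older respondents get weights below 1 and younger ones weights well above 1. The catch, and Gregg's point, is that the few young people who *did* answer the phone may not resemble the many who didn't, so inflating their answers is exactly the "messing around with random probability theory" he warns about.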
The story had all the more relevance for me after spending a weekend in Toronto, where I met other professionals in the opinion research business and we wound up discussing virtually the same problems, including the way these media polls have been extended to produce some highly speculative but sensational seat-projection schemes.
To recap, there are numerous sources of potential error in polls:
- It is becoming increasingly difficult, and expensive, to draw a truly random and representative sample.
- The theory behind the calculation of margins of error assumes the ability to draw a random sample. Sampling from an online panel may be one way to reach people that telephone surveys no longer reach, but it is not a random procedure, and the industry association does not allow its members to report margins of error on those samples (though notably it doesn't sanction the violators either).
- Even in a perfectly random sample, there would be normal random variation, error and statistical noise.
- Regional sub-samples of the small national samples that can be afforded in the Canadian marketplace are wholly unsuited to PREDICTING ANYTHING by way of individual seat outcomes with any kind of accuracy.
- Comparing voting-intention results from polls conducted now (regular sample sizes, a public barely paying attention to politics) with results reported on E-1 to E-3 (an attentive public, topped-up sample sizes, and everyone on their best sampling behaviour because they know they'll be judged on the results) is not a valid basis for drawing conclusions about the overall "house effects" of one polling firm or another.
- The differences between results from various polling firms will have as much to do with contact methodology (landline vs. cellphone vs. online, or some combination thereof), the amount of effort (and therefore cost) that goes into drawing a sample, the wording of the questions, their ordering, the other topics on the omnibus survey, and so forth, as with any real difference in public opinion.
… and in the seat projections based on them:
- Built into the previous election results in individual ridings is underlying random variation (weather conditions, changes in candidates, etc.) that can't be separated out from party results themselves.
- Even the most meticulous methodology can't exactly capture the patterns for a single riding when it's designed to predict a large number of ridings. Compared to a population of ridings, any given riding will have inherent idiosyncrasies, which create random variation that produces errors in estimation.
- A methodology that wants to be taken seriously has to quantify, in a replicable way, the variables and weights that it introduces to modify the application of national polling results to past election results … and the basis for those assumptions and weights has to have a validated evidentiary basis in the literature.
- No one can point to a study that has ever validated the use of a party's or candidate's "resistance to polling trends" in a given riding as a weighting factor. That pattern could just as easily reflect a lack of competition, a bad snowstorm, or other random variation, and we have no way of knowing the difference. There is, on the other hand, some published work on incumbency.
- The greater the number of parties whose votes need to be predicted in order to project a seat's outcome, the greater the scope for random error.
- A seat projection based on a poll today should never be extrapolated to predict the outcome of an individual riding election after a five-week or longer election campaign. And yet this is done constantly, and sensationally to great effect, in quite an irresponsible and unscientific way. Projection ≠ prediction. This will also be important to remember when the strategic voting sites crank up into full gear and start extrapolating national polling results down to a riding level, with predictably bad results.
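The point about regional sub-samples can be made concrete with the standard margin-of-error formula for a simple random sample. The sample sizes below are illustrative, not taken from any particular poll, and the formula assumes the random sampling that, as noted above, these polls rarely achieve in practice:

```python
import math

# 95% margin of error for a simple random sample:
# MoE = z * sqrt(p * (1 - p) / n), worst case at p = 0.5, z = 1.96.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

national = margin_of_error(1000)  # typical national sample
regional = margin_of_error(150)   # e.g. a prairie sub-sample of that same poll

print(f"n=1000: +/- {national * 100:.1f} points")  # ~ +/- 3.1
print(f"n= 150: +/- {regional * 100:.1f} points")  # ~ +/- 8.0
```

A regional sub-sample of 150 carries a margin of error of roughly eight points, meaning a party's "regional number" can swing by fifteen points or more between surveys with no change in opinion at all, which is why projecting individual seats from such figures is hopeless.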
For those who don't know, because they've never worked in the polling industry or professionally in politics: political parties don't waste their money conducting "beauty contest" or "horse-race" surveys, because the numbers don't tell them a thing and aren't worth the money.*
[* ... unless their purpose is to leak the numbers in order to try to drive a certain result, in which case there's a good chance they never even conducted the poll in the first place, or else heavily skewed the results by asking leading questions first. Don't think this has never happened; it's why rules were placed in the Elections Act to prevent the reporting of polls during the final days of a campaign, by the very politicians who are all too familiar with the history of such practices.]
What the parties do do was laid out long ago in an article written by Frank Luntz in the Journal of Campaigns and Elections that I wish I could lay my hands on now, but I don't imagine it has changed substantially in the intervening 20 years:
- Conduct a large-scale Baseline Survey, with a sample size on the order of what Michael Marzolini of Pollara is reported to have produced for the Liberal Party recently (I heard 5,000 nationally), which allows for *significant* regional and demographic sub-samples. Its purpose is to:
  - identify each party's perceived strengths and weaknesses, and those of their leaders
  - find the issues on which each party is seen as credible, and those on which it is not
  - test people's current inclination, and their overall willingness to ever consider supporting each party
  - find which parties voters would never consider supporting, and why not
  - locate second choices, firmness of support, and party identification ("thinking of federal politics, which party do you normally feel closest to")
  - ask questions designed to uncover which issues might drive a person's vote one way or the other (since not all issues are vote-determining issues), and to find the best ballot question from the perspective of your own party (and the ballot questions competing parties will be trying to create)
  - ascertain the more permanent attitudes that underlie, and perhaps activate, voters' current opinions and beliefs
- From the baseline survey, the parties can identify the regional and demographic characteristics of their likely swing voters (and of the voters most at risk of defecting). These demographic groups are then studied in greater detail through focus groups, which test what language moves people and how they react to advertising "creative", and which build a qualitative understanding of their attitudes and motivations.
- Also from the baseline survey, more targeted tracking samples can be drawn to see how those target groups are reacting to developments on an ongoing basis.
Of course, somewhere in those surveys a voting intention question is going to be asked, but it's as much to generate categories of respondents as anything else. A political party certainly doesn't make decisions – such as whether it's going to support a federal budget – on the answer to that one question alone, and to suggest that would happen betrays a real absence of relevant experience.
So: stop reading "the polls", stop obsessing about "the polls", and for goodness' sake stop writing that "the polls say this will happen or that will happen", because they don't say any such thing. And take amateur seat predictions with a zero-general-election track record with as big a boulder of salt as Joan Bryden's sources advise taking the polls. "The polls" didn't predict the Conservatives would win Montmagny–L'Islet–Kamouraska–Rivière-du-Loup, or that the Liberals would win Winnipeg North either. Campaigns matter, and outsiders don't yet know which ridings the political parties are going to target or send resources to, though we can certainly draw some conclusions now (and no, it's not based exclusively on whether they were "close" last time either).
By way of demonstrating the presence or absence of variability in party results across all ridings over the past four elections, I have assembled this chart showing party vote as a percentage of eligible voters (with non-voters in purple), in each of five regions (Atlantic, Quebec, Ontario, Prairies, BC) for each of the five main political parties. For ease of presentation, the 2000 results of the Canadian Alliance and Progressive Conservative parties were added together and are displayed with the Conservative Party's results from 2004-2008. The Bloc is shown on the same chart as the Green Party, given the almost complete lack of overlap between their bands of support.
This is how many data points you have to predict accurately in order to extrapolate predictions from a poll-based seat projection, i.e., (308 or so x 4) + 75, assuming turnout is constant (which it hasn't always been). Notice the outliers amongst the Liberals on the Prairies and in BC (fourth and fifth graphs down in the left-most column), the NDP in the Atlantic and Québec, and the Conservatives in Québec (and notice the four seats well below their range in western Canada: Churchill, Winnipeg North and Winnipeg Centre on the Prairies, and Vancouver East in BC).
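For the record, here is my reading of the arithmetic behind the (308 x 4) + 75 figure: four parties running in all 308 ridings, plus the Bloc contesting only Quebec's 75 seats. The party labels in the comments are my interpretation, not spelled out in the figure itself.

```python
# Back-of-envelope tally of the riding-level vote shares a seat projection
# must get right (interpretation of the "(308 or so x 4) + 75" figure).
ridings = 308
national_parties = 4   # e.g. Conservative, Liberal, NDP, Green in every riding
bloc_ridings = 75      # the Bloc runs candidates only in Quebec

data_points = ridings * national_parties + bloc_ridings
print(data_points)  # 1307
```

Over 1,300 individual vote shares, each subject to its own local noise, all extrapolated from one national sample of perhaps a thousand respondents.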