MayDay 2011: The Futility of Trusting Polls
This article raises some serious issues, and in effect it comes to the conclusion that polls are useless and bought.
What does this mean for any sense of democracy in Canada when corporations are able to manipulate public opinion into believing the lies that Stephen Harper tells us or that the NDP is ‘surging’ in the polls? What’s the truth behind any of this?
If there are now more cellphones than there are landlines, youth tend to own the former over the latter, and youth tend to lean towards progressive causes, then what does this tell us about the seemingly disproportionate percentage for the Conservatives? It’s a lie that they rely on very heavily; that’s what it tells me.
It’s why it’s vital that youth vote in this election.
Here are a couple of other issues that I have with polls:
- They tend not to differentiate between urban and rural voters, with urban voters having more cell phones than rural voters.
- They don’t collect data related to what’s happening on social platforms. The tools exist to monitor discussions – many of which are happening in the tens of thousands – but they don’t and the media is doing a poor job of reporting on this. Why? Because the discussions really don’t favour the Cons.
- The questions can be skewed to ensure a specific reaction. Don’t believe me? How about this question that I got some months ago: “Is Jack Layton a bad leader because he supports the Taliban or because he doesn’t support tough on crime policies?”
And here’s the big question: are Canadians being manipulated into thinking that the NDP is doing much better than they are in order to confuse voters and ensure a Conservative majority? I don’t want to take the steam out of Jack’s train, but I’m very concerned about the manipulation that might be behind all of this.
We’ll know for sure on Monday, but in the interim: DON’T TRUST POLLS. The only real poll is the result that we have on Monday, May 2.
Here’s a sample of data from the same day (April 20):
Firm             BQ    CON   GRN   LIB    NDP    Error ±
Forum Research   6     36    6     23     25     2.2
Ipsos Reid       6     43    4     21     24     3.1
EKOS Research    6.5   34    7.8   24.7   24.7   2.1
Nanos Research   7.5   39    3.4   26.7   22.1   3.1
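One way to read that table: margins of error on independent polls combine in quadrature, so you can check whether the gaps between pollsters exceed what sampling noise alone would allow. A quick sketch (my own back-of-envelope check, not anything from the CBC piece) using the Ipsos and EKOS Conservative numbers above:

```python
import math

# Conservative share and reported error (±) from the April 20 table above
ipsos = (43.0, 3.1)   # Ipsos Reid
ekos = (34.0, 2.1)    # EKOS Research

gap = abs(ipsos[0] - ekos[0])  # 9.0 points

# Errors on independent polls add in quadrature, so the margin of error
# on the *difference* between them is sqrt(e1^2 + e2^2).
combined_error = math.sqrt(ipsos[1] ** 2 + ekos[1] ** 2)

print(f"gap = {gap:.1f} points, combined error = ±{combined_error:.2f}")
# prints: gap = 9.0 points, combined error = ±3.74
```

The nine-point gap is well outside the combined margin of error, so sampling noise alone can’t explain it: the two firms genuinely disagree.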
Here’s the text from the CBC article (with bold/italics mine):
How can there be an almost nine-point difference in the Conservative vote between Ipsos Reid and Ekos? Or more than four points for the Greens between Ekos and Nanos, and more than five for the Liberals between Ipsos and Nanos?
If the pollsters are so far apart, how can we rely on their interpretation of what is happening “out there” in the Canadian electorate?
The question is important because we have reached the stage of the campaign where polls have become the story. The platforms have been released, the promises rolled out, the debates are a fading memory.
This campaign needed a new story line to carry it to election day, and it has found that in the so-called NDP surge.
Campaign coverage is now focused almost exclusively on the horse race and the strategic decisions each party is making to come to grips with the new reality that the pollsters assure us we are now confronting.
But what if they’re wrong?
Dirty little secret
It’s a question pollsters themselves have been asking lately.
In recent months, several prominent Canadian pollsters have been raising some pretty fundamental questions about their industry.
The most provocative critic has been Allan Gregg, chairman of Harris-Decima, which provides political polling for the Canadian Press. He is also a regular member of The National’s At Issue panel on CBC TV.
Gregg has been doing political polling since the 1970s and in an interview with the Canadian Press he said that “there’s broad consensus among pollsters that proliferating political polls suffer from a combination of methodological problems, commercial pressures and an unhealthy relationship with the media.
“The dirty little secret of the polling business,” he went on, “is that our ability to yield results accurately from samples that reflect the total population has probably never been worse in the 30 to 35 years that the discipline has been active in Canada.”
Amongst the methodological problems that Gregg and others identify is the incredible shrinking response rate for polls conducted by telephone.
Thirty years ago, about 70-80 per cent of people called by pollsters agreed to be surveyed. Today, that rate is under 15 per cent and Gregg believes those people tend to be older, less well-educated and more rural than the general population.
But for the purposes of their polling, researchers are obliged to assume that the 15 per cent of callers who agree to spend 20 minutes talking to them are representative of the 85 per cent who are too busy or whatever to participate or who never pick up at all because they can identify a pollster through Caller ID.
The growing number of households without landlines also poses significant challenges.
There are now more cellphones than wired phones in Canada (25 million vs. 17.5 million), and those cellphone numbers are harder for pollsters to get. That leaves a large number of people, many of them younger, whose views may never be surveyed.
In both these cases, researchers have developed “models” that they hope can compensate for these and other instances where polls are conducted on an unrepresentative sample. But the accuracy of these models remains in question.
The basic methodological assumption of the polling industry has always revolved around random probability sampling, meaning that everyone has an equal chance of being interviewed. That is now clearly no longer the case.
Firms like Nanos, Decima, Ipsos-Reid and Ekos have become household names in Canada because of their high-profile political polling.
But political polling is a loss leader for these companies. They offer their services to media outlets at a deeply discounted rate, or sometimes even for free, because the profile they develop at election time helps them in their core business, which is traditional market research.
They make their money asking people what margarine they spread on their toast, not who they are likely to vote for.
In fact, that voter preference question is often buried inside a longer survey about some completely unrelated subjects.
Over the years, researchers have discovered that where the political question is placed in the survey, and what else is being asked of the respondent, can affect how a person answers the question.
But these placement concerns are rarely factored into the results.
Many industry veterans now think that too much poorly executed and poorly resourced polling is causing significant harm to the industry.
“I believe the quality overall has been driven to unacceptably low levels by the fact that there’s this competitive auction to the bottom, with most of this stuff being paid for by insufficient or no resources by the media,” Frank Graves of Ekos told the Canadian Press in February.
“You know what? You get what you pay for.”
Unhealthy relationship with the media
Still, no one expects the media’s love affair with opinion polls to end any time soon. Polls are the best news that money can buy.
They keep the campaign story moving along, even when everything else has been thoroughly talked out.
In this campaign, there were 19 days between the English-language leaders’ debate and election day. In 2008, there were only 12.
Also, in 2008, the last platform was dropped just six days before voters went to the polls. This year, it was more than three weeks before. These kinds of gaps require new story lines.
But the problem with using polls here is that, too often, the reporting of them is based on creating drama where none exists.
In the process, non-trivial issues like margin of error, problems with samples and methodologies tend to get pushed aside.
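Those quoted margins of error come from the textbook formula for a simple random sample. A minimal sketch (assuming a 95 per cent confidence level and the worst-case split of p = 0.5, which gives the widest margin):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error, in percentage points, for a simple random
    sample of n respondents at 95% confidence (z = 1.96)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents:
print(f"n=1000: ±{margin_of_error(1000):.1f} points")  # ≈ ±3.1
# Quadrupling the sample only halves the margin:
print(f"n=4000: ±{margin_of_error(4000):.1f} points")  # ≈ ±1.5
```

Note that this formula only holds under true random probability sampling; with response rates below 15 per cent and landline-only calling, that assumption is exactly what the pollsters quoted here say has broken down.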
Is that what’s happening with the story of the NDP surge? That will be the subject of the next post.