Is the polling meltdown in the UK a stern warning for Ireland [and #BrexitRef]?

So the British Polling Council is publishing Professor Patrick Sturgis’ report “The Inquiry into the Failure of the 2015 Pre-election Polls…” It doesn’t look good for the pollsters…

British Polling Council report

For the first time we may be getting a clear glimpse of just who the shy Tory voter is. It may be that they aren’t shy at all, more that they are just too busy working to answer the phone or say yes to a pollster’s enquiry.

Here’s the thing: not only did they get it wrong at the end, it looks like they were getting it wrong from at least 2013 (remember, around the time Miliband forgot to mention the Tories’ lagging deficit?).

Polls have become so unquestioningly central to political debate that in many cases debate about policy has been displaced, leaving the electorate unsighted on the material choices facing them.

Jim Murphy, speaking on BBC Radio Four on Sunday, argued that it led to a sense of complacency within Labour, whilst, given the prospect of a hung parliament, the Tories heavily moderated their own manifesto commitments.

In Ireland right now everyone (including Slugger) is playing the what-do-the-polls-say-about-the-actual-outcome game. However, there’s no equivalent to YouGov, so emailed surveys are not such a factor here.

Irish pollsters generally extrapolate from a very large pile of ‘don’t knows’, and then reassign them to parties on a national basis. The challenge, however, is how you scrape those national percentages back into constituencies.
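For illustration, here is a minimal sketch of what that national-level reallocation can look like. The proportional split and all the figures are assumptions for the sake of the example, not any Irish pollster’s actual method:

```python
# A minimal sketch of reallocating 'don't knows' in proportion to
# declared support. The figures and the proportional rule are
# illustrative assumptions, not any pollster's actual methodology.

declared = {"FG": 26, "FF": 18, "SF": 16, "Lab": 7, "Others": 13}  # % of sample
dont_know = 20  # the large pile of undecideds, also a % of the sample

declared_total = sum(declared.values())  # 80% of respondents named a party

# Each party receives a share of the don't knows matching its share of
# the declared vote, bringing the adjusted figures back up to 100%.
adjusted = {
    party: share + dont_know * share / declared_total
    for party, share in declared.items()
}

for party, share in adjusted.items():
    print(f"{party}: {share:.1f}%")
```

The harder problem the post points to – projecting those smoothed national percentages back down onto individual multi-seat constituencies – has no similarly tidy formula.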

PR-STV renders the multi-member constituency a lot more unpredictable than FPTP, with some voters identifying much more with their candidate than with his or her party.

In fact the multi-member constituency is a nightmare for policy-based politics. As one wit told Hugh Linehan recently, the most negative thing you can do in Irish politics is to tell local voters that your party colleague is safe.

Take the Irish Labour party as an example. Even an 8% national rating includes large swathes of Ulster and the west that simply don’t vote Labour, so there may be pockets where they do very well. Ditto Fianna Fail. At certain ratings, too, a swing of 1% or 2% can add or subtract four or five seats.

In truth the public feedback loop in politics now operates at a speed and in a manner we’ve not seen before. Perhaps voter volatility is now a feature rather than a bug, brought on by the recession or anger at a wealthy elite that many feel have not faced the consequences of their actions.

If the UK polls – which journalists have in many ways been using as a substitute for in-house sampling (of real people) – failed in 2015, what guarantee do we have that they aren’t already getting a badly skewed reading on Brexit?

Unless, that is, they are really bad polls and their extrapolations are (mostly) for fun. The problem is the literal interpretation of them: they can foreshorten the perspectives of those who over-rely on them, encouraging a focus on hypotheticals and squeezing the limited space for real issues.

Another log on the fire, perhaps, for those cynics who are fond of saying, “don’t vote, because the government always gets in”… (ie, whoever it is, it is always the one you didn’t want…).

Meantime, if you think the polls that overstate your favourite party’s position are your friends: well, think again.


  • Kev Hughes

    Interesting enough read. I think you could do worse for further reading than this piece, which was in the New Yorker in November 2015:

    http://www.newyorker.com/magazine/2015/11/16/politics-and-the-new-machine

    But, regarding the last Westminster elections, Rawnsley at the Observer has the right idea when he says he should have looked at the numbers on which leader the electorate preferred and who they trusted with the economy; in a normal, functioning democracy, those are the two that really count:

    http://www.theguardian.com/commentisfree/2016/jan/17/opinion-polls-matter-despite-wrong-predictions-general-election

  • Reader

    Mick Fealty: For the first time we may be getting a clear glimpse of just who the shy Tory voter is. It may be that they aren’t shy at all,
    If they were actually ‘shy’ (euphemism…), then the exit polls would have been wrong too. But the exit polls were fantastically accurate.
    However, I think there are actually ‘shy’ voters on this side of the North Channel. How do exit polls compare with normal polls and actual results over here?

  • Robin Keogh

    Polls are fascinating but frustrating at the same time. There is so much emphasis on them in the run-up to elections that we tend to almost lose sight of policy, which is a point I think you have made well in your post. There is also the element of trust: because the pollsters are not visible to the reader, we are taking their word for it, so to speak.

    And without sounding overly cynical, sometimes the variation in results between polls released at the same time and collated at the same time can leave one scratching one’s head quite aggressively, wondering if the results are possibly skewed in a certain direction to give a set picture.

    This is the first time I have been in the thick of an election campaign charged with the responsibility of PR. What fascinates me is the amount of private polling parties undertake, particularly in areas one would imagine are not fertile ground for the party involved. Good news is always good news, while bad news must certainly be a mistake.

  • AndyB

    I still stand by what I said back in March 2015, and which I had been thinking for some time.

    UKIP was doomed from the beginning for any number of reasons.

    First of all, it was a non-proportional system. An even spread of support will gain you zero seats – otherwise known as the Liberal Democrats’ problem since their inception (a toy illustration follows this comment).

    Secondly, as a side-effect, non-proportional systems lead to tactical voting: supporters of smaller parties vote either to keep the incumbent in as the less bad option, or for whichever opponent has the most hope of unseating them – regardless of their personally preferred party, which is what they will most often give as the answer to “who will you vote for at the election”, whatever their tactical intentions. Perhaps potential tactical voters hedge their bets, watching the polls before making a final decision on whether to vote with their hearts or not.

    What I missed was the logical consequence of all of the above: UKIP voters are more likely to tactically choose Tory, and polls were predicting that they would do better proportionately than the Lib Dems. Lib Dems enjoy some pockets of support, as was seen in the seats they kept, but were heavily punished for not getting in the way of the Tories sufficiently.

    The coalition had done a lot of damage to the Lib Dems, and to three party politics in general, but with “if you vote Lib Dem you get Tory” being inadequate to counterbalance “if you vote Lib Dem you don’t know what you get so vote big two” and “if you vote Lib Dem you might get Labour” and the UKIP effect, the outcome was inevitable. I could have been famous if I’d made that connection.

    (The other thing I got right: the DUP was never close to having an impact on the Government!)
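AndyB’s first point is easy to see with invented numbers: under first-past-the-post, two parties on the same national share can win wildly different seat counts depending on how that support is distributed. A toy sketch, with every figure made up for illustration:

```python
# Toy sketch: the same national vote share, two very different seat hauls
# under first-past-the-post. All numbers are invented for illustration.

N_SEATS = 100
MAJOR_PARTY_SHARE = 40  # % the locally dominant party polls in every seat

def seats_won(minor_share_by_seat):
    """Count seats where the minor party tops the locally dominant party."""
    return sum(1 for share in minor_share_by_seat if share > MAJOR_PARTY_SHARE)

# Party A: 15% in every single constituency.
evenly_spread = [15] * N_SEATS

# Party B: the same 15% national share, but piled into 20 constituencies.
concentrated = [55] * 20 + [5] * 80  # (55*20 + 5*80) / 100 = 15% nationally

print("evenly spread:", seats_won(evenly_spread), "seats")  # 0
print("concentrated: ", seats_won(concentrated), "seats")   # 20
```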

  • Ernekid

    In Scotland the polls said that the SNP were on course for a stonking majority, and funnily enough they achieved it. They are set for a massive victory in the upcoming Holyrood elections, with 52% support: http://survation.com/8926-2/

    Maybe the problem with polling is an English phenomenon.

  • MainlandUlsterman

    I’ve never worked in polling, though I do work in the research industry; and clearly the election was a disaster for pollsters. A few words, though, to temper the general ridicule of the poor pollsters. I think there has been some wilful ignorance shown by some journalists paid to communicate what polls mean to the public – and some of the blame for the disappointment lies with them for leaping on the figures too unthinkingly:

    1. Polling is only ever an estimation; it’s not an exact measurement and never can be.

    2. Any figure in a poll has a margin of error, usually plus or minus 3 per cent (one explanation is here – https://ropercenter.cornell.edu/support/polling-fundamentals-total-survey-error/). So a poll saying Labour is on 30 per cent and the Tories 36, for example, would technically still be correct if they both ended up on 33 per cent – that’s within the prediction of the poll. People kind of get that, I think, and kind of don’t! I think a lot of people just want to look at the headline figure and maybe the trends. But all a poll figure of ‘30 per cent’ is saying is that it could be anywhere between 27 and 33 – WE DON’T KNOW. ‘30’ is just a handy summary for a more complex finding. (A quick sketch of the arithmetic follows this comment.)

    3. This is the big one for 2015 – I heard pollsters VERY clearly and VERY repeatedly in the run-up to the election stressing to anyone paying attention that the 2015 election was going to be unusually difficult to sample for, and that the polls should therefore be treated with particular caution. I heard several pollsters, well before the polls were shown to be flawed, describe a perfect storm of factors that made the polling hard to get accurate. I got the impression some media were thinking, “yeah, whatever, the usual caveats, yada yada yada”, and not actually noticing pollsters’ change of tone and the clear alarm signals they were sending out.

    Among the factors that were new since the 2010 General Election were: the scale of the rise of UKIP; the impact of the referendum on party allegiance in Scotland; two parties going into the election as coalition partners.

    Of course polls are there to tame the unpredictable – but when pollsters do weightings etc to scale up what 1,000 people told them to a national picture, they have to rely heavily on what they’ve seen in the past to predict it right. In 2015 we had several factors that introduced huge doubt. (It’s not a simple measuring job – it’s incredibly complicated statistical modelling, way beyond my ken or even the ken of most statistical researchers). That they got it wrong this time is disappointing for them, but actually, some of them weren’t so far off. And really people should have been much less surprised that the polls didn’t get this one spot on. Some elections are easier to poll for than others. It pays to listen to what the professionals doing it themselves say about how firm or shaky the data is.
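On point 2 above, here is a quick sketch of where the familiar “plus or minus 3” comes from, assuming the textbook case of a simple random sample of roughly 1,000 respondents. Real polls use quota samples and weighting, so this is the back-of-envelope approximation, not what any pollster literally computes:

```python
# The textbook 95% margin of error for a simple random sample.
# Real polls use quota sampling and weighting; treat this as the
# back-of-envelope arithmetic behind "plus or minus 3 per cent".
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
for p in (0.30, 0.36, 0.50):
    print(f"poll figure {p:.0%}: +/- {margin_of_error(p, n):.1%}")

# p = 0.50 is the worst case, roughly +/- 3.1 points at n = 1,000, which
# is why 'Labour 30, Tories 36' can still be consistent with a 33-33 tie.
```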

  • Gingray

    Mick
    This is of interest:

    http://www2.politicalbetting.com/index.php/archives/2016/01/14/the-ge2015-polling-fail-put-down-to-unrepresentative-samples/

    “The report goes on to suggest there are two main reasons why the sample of respondents interviewed by BSA 2015 proved to be more representative than those obtained by the polls.

    More time and effort is needed to find Conservative voters. Polls are conducted over just two or three days, which means they are more likely to interview those who are contacted most easily, either over the internet or via their phone.

    The evidence from BSA suggests that those who are contacted most easily are less likely to be Conservative voters. The survey made repeated efforts during the course of four months to make contact with those who had been selected for interview. Among those who were contacted most easily – that is, they were interviewed the first time an interviewer called – Labour enjoyed a clear lead of no less than six points, a result not accounted for by the social profile of these respondents. In contrast, the Conservatives were eleven points ahead amongst those who were only interviewed after between three and six calls had been made.”

    ————————

    With a larger sample and more effort to track down the individuals chosen, random selection is by far the best way to poll.
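By way of illustration of the BSA finding quoted above, here is a toy simulation of non-response bias. The vote shares and contact rates are invented for the example; the only point is the mechanism:

```python
# Toy simulation of non-response bias: if harder-to-reach voters lean one
# way, a quick poll that only reaches first-call respondents will skew.
# All shares and contact rates below are invented for illustration.
import random

random.seed(1)

TRUE_CON_SHARE = 0.40   # hypothetical true Conservative share
REACH_IF_CON = 0.30     # chance a Conservative answers the first call
REACH_OTHERWISE = 0.50  # chance anyone else answers the first call

population = []
for _ in range(100_000):
    is_con = random.random() < TRUE_CON_SHARE
    reached = random.random() < (REACH_IF_CON if is_con else REACH_OTHERWISE)
    population.append((is_con, reached))

# A two-day poll: only those reached on the first call get interviewed.
quick_poll = [con for con, reached in population if reached]
# A BSA-style survey: keeps calling back until everyone sampled is reached.
full_sample = [con for con, _ in population]

print(f"true Conservative share:  {sum(full_sample) / len(full_sample):.1%}")
print(f"first-call-only estimate: {sum(quick_poll) / len(quick_poll):.1%}")
```

Calling back over four months, as the BSA survey did, removes that particular skew – at a cost no overnight poll can afford.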