
Response to Yesterday's Times and YouGov Articles, and to YouGov's Published Research on Survation's Scottish Independence Methodology

This week Peter Kellner, President of YouGov, published an article, both on YouGov's website and in The Times newspaper, giving his views on why opinion polls in Scotland by different polling companies have produced consistently divergent results on the question of the Scottish independence referendum. His argument was that YouGov's polling in Scotland is superior to that of Survation (and other polling companies) because the other companies weight their results to recalled 2011 vote, a method he claimed is inaccurate owing to an excess of core SNP voters in the raw data and a shortage of those floating voters who supported Labour in 2010 but the SNP in 2011 (and who are hence more likely to be voting "No" in the referendum).

This argument was based on a comparison between a new YouGov poll specially undertaken for the purpose and a set of Survation data tables, taken from our last Scottish Omnibus poll, showing recall of 2010 and 2014 vote. Those tables were produced solely to aid our own internal methodological review of voter recall. As it happens, Peter Kellner has chosen to use them to publish his own analysis of our methodology, to which we therefore reply here.

Use of 2011 Weighting

Peter Kellner is certainly correct that there are significant divergences in Scottish opinion polling between the various polling companies, with YouGov producing consistently higher figures for "No" than the other companies regularly operating in Scotland. In fact, YouGov's figures have been very close to those of the Survation poll conducted on 29th-31st January. That poll was the subject of active discussion between ourselves and BPC President and expert psephologist Professor John Curtice (among others) over our use of 2010 general election vote weighting rather than 2011 Holyrood vote, the conclusion of which was that 2011 Holyrood weighting was widely recommended for Scottish opinion polling (see here for Professor Curtice's thoughts on the issue).

Since that time, Survation has consistently adopted the method already used by ICM, among others, of taking 2011 vote as the baseline for past vote weighting in Scottish polls, on the basis that, to quote Professor Curtice, "evidence strongly suggests that people have a better memory of what they did in 2011", compared to 2010 vote, which is subject to considerable false recall. For this reason, the argument that our 2011 vote weighting is poor simply because it produces results that do not fit recalled 2010 vote is unconvincing: we use 2011 vote precisely because it is better recalled than 2010 vote.
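For readers unfamiliar with the mechanics, the sketch below illustrates how this kind of past vote weighting works in principle. The party shares used are illustrative round figures only, not our actual weighting targets or raw data.

```python
# Sketch of past vote weighting: each respondent is scaled so that recalled
# 2011 Holyrood vote in the sample matches the actual 2011 result.
# All shares below are illustrative round figures, not real targets or data.

target_2011 = {"SNP": 0.45, "Labour": 0.32, "Conservative": 0.14, "Other": 0.09}
sample_2011 = {"SNP": 0.47, "Labour": 0.30, "Conservative": 0.14, "Other": 0.09}

# Weight for each recalled-vote group = actual share / sample share.
weights = {party: target_2011[party] / sample_2011[party] for party in target_2011}

for party, weight in weights.items():
    print(f"{party}: weight {weight:.3f}")

# SNP respondents are weighted down slightly (~0.957) and Labour up (~1.067).
# When raw recall is already close to the real result, as in our recent polls,
# these factors stay near 1 and the adjustment is small.
```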

Repeated Survation polls since then have added to this evidence: the raw figures in all our polls since January have had recalled 2011 vote very close to the actual results, and have consequently required little adjustment by our method. In this sense it is not correct to describe our use of 2011 weighting as "causing" the difference between our final results and YouGov's, as our final results would have been very similar had we used no past vote weighting at all. At most, it could be argued that the raw data from the panels we draw on are too "nationalist" in character (as, presumably, would also have to be the panels used by ICM and Panelbase) and that 2011 vote weighting fails to correct for this imbalance. This is a risk of which all companies, including YouGov, are no doubt fully aware with regard to their own panels, and one which we and all other pollsters take a range of measures to continually guard against.

Issues with Panel Imbalances

There certainly does appear to be a difference between the raw (unweighted) data in YouGov's polls and Survation's. Whilst Survation's unweighted results in our most recent poll had slightly too many SNP supporters (by about two percentage points) and slightly too few Labour supporters (by about three and a half points), YouGov's unweighted figures in the poll commissioned especially to test our methods over-represented Labour voters by approximately five points. YouGov have therefore ended up weighting their Labour voters down by a significant amount, whilst Survation have weighted our Labour voters up (and our SNP voters down) by a smaller amount.

Peter Kellner may be correct in his estimation that our raw data contain too many "core" SNP 2011 voters compared with floating SNP 2011 voters who previously voted Labour, or it may be that YouGov's own panel for some reason contains too many Labour voters of one description or another. Alternatively, both may be partly true, and we will find in September that the final result lies somewhere between our figures and YouGov's. As it is, Peter Kellner provides no plausible hypothesis as to why the raw makeup of YouGov's panel should be a much closer fit to reality than the panels used by Survation, ICM or Panelbase, so it is hard to draw any firm conclusions on this point.

False Recall

Given the significant possibility of false recall of past vote, however, there is no realistic way Survation could better correct for any panel imbalances, even if they do exist. Doing so would require a complex segmented analysis combining both 2011 and 2010 vote, which we feel is too unreliable to use safely, particularly when dealing with small sub-samples such as Labour-SNP switchers (discussed below). If false recall is already considered a problem for 2011 vote and is known to be significantly worse for 2010 vote, there can be little justification for Survation adopting a methodology that relies on recall of not one but both of these. Certainly we are in no way ideologically wedded to past vote weighting, having abandoned it for our published constituency polling in December 2013 for precisely the false recall reasons now under discussion.

However, not least because it appears to have only a minor impact on our raw data, we continue to believe that 2011 weighting is for the time being the best option open to us in Scotland-wide online polls, and we will therefore continue to use it until the referendum itself, at which point we will review the results.

YouGov’s Approach?

YouGov, of course, enjoy the advantage of having had a large existing panel at the time of the 2011 Holyrood elections, from which they collected information about voting behaviour at the time. That said, it is not clear exactly how this information is used or combined with more recent data collection, as Peter Kellner does not go into detail about YouGov's own weighting methodology beyond saying that YouGov "collects large amounts of data from its panel at the time of each election, and as far as possible uses this information, rather than remembered vote from months or even years later." YouGov's methodology page states that they use "separate weighting for voters who split Labour at Westminster elections and SNP at Holyrood elections", targets for which are presumably derived from YouGov's own internal research in 2011.

Such a method would, however, have problems of its own: as well as the weighting targets themselves being dependent on further polling rather than known demographics, there is the issue of weighting up a very small number of these "red Nats", as Peter Kellner describes them (those who voted Labour in 2010 and SNP in 2011).

Were Survation to adopt such a method, as Peter Kellner himself notes, we would be weighting the mere 16 "red Nats" in our recent poll up to the 60 that he believes the correct target should be. Weighting 16 respondents up to 60 in any poll would be considered quite a stretch, as it risks greatly compounding any random sampling inaccuracies in the raw data. Hence we would not consider this a wise method for Survation to adopt.
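To illustrate the scale of the problem, here is a rough back-of-envelope sketch (our own illustration, not a description of YouGov's actual procedure) of the sampling noise carried by a cell of just 16 respondents when it is weighted up to 60:

```python
import math

# Back-of-envelope illustration (not YouGov's actual procedure): weighting
# 16 "red Nat" respondents up to a target of 60 gives each a weight of 3.75,
# so any sampling noise among those 16 people is multiplied accordingly.

n_cell = 16   # "red Nats" actually found in the sample
target = 60   # target the cell would be weighted up to
weight = target / n_cell
print(f"weight per respondent: {weight:.2f}")  # 3.75

# 95% margin of error on a 50/50 split among the 16 real respondents,
# which is the true information content of the cell whatever it is weighted to.
moe = 1.96 * math.sqrt(0.5 * 0.5 / n_cell)
print(f"margin of error on the 16 real respondents: +/-{moe:.1%}")  # ~ +/-24.5%

# Whatever referendum split those 16 people happen to show is projected,
# noise and all, onto 60 respondents' worth of the weighted sample.
```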

Looking at 2014 European Election Results

Peter Kellner also mentions the recent 2014 European Parliament elections, which he cites as further evidence of YouGov's accuracy vis-à-vis Survation. It is true that the final Survation and ICM polls in Scotland ahead of the European elections over-stated the SNP's vote by 7-8 points. However, YouGov's own final poll over-stated Labour's share of the vote by 5 points, whilst Survation's and ICM's figures for Labour were much closer to the actual outcome. Again, this might support a theory that YouGov's panel in Scotland is too strongly Labour even whilst those of Survation and ICM are too strongly SNP.

However, this picture is further complicated by a factor that Peter Kellner did not mention: both Survation and ICM preceded their European Parliament voting questions in these Scottish polls with an independence referendum question, whilst YouGov's final poll was purely about the European elections. This difference in question ordering may well have had an additional effect in exaggerating the SNP vote share in the Survation and ICM polls, by focusing pro-independence voters' minds on the independence debate as an issue of importance and leading them towards mentally associating themselves with the SNP.

As for looking back at recalled 2014 European Parliament vote in subsequent polls as an indicator of how representative a panel or weighting methodology might be, we do not believe this is likely to be a useful approach. Peter Kellner points out that the recalled 2014 vote in the Survation post-election poll was significantly higher than the actual results of the election. As he himself notes, "After low turnout elections, it's common for more people to claim to have voted than actually did so. Overclaiming may not be evenly spread; so a poll whose sample is perfect may nevertheless produce 'wrong' recall figures." He nevertheless goes on to assert that this "cannot plausibly explain" the deviation of recall from the actual results in our poll.

In fact, 56% of respondents in our post-election poll recalled voting in the European elections, whereas only 33.5% of the electorate actually did so, further evidence of the widely known psephological phenomenon of people exaggerating their voting activity (something on which Survation presented evidence to a House of Commons committee only last week). A strong skew towards SNP voters among these falsely claimed voters could well account for much of the mismatch between the 'recalled' 2014 vote and the actual results, especially if there is reason to believe that actual turnout among SNP supporters was lower than that among other parties' supporters.
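As a purely illustrative calculation, the sketch below shows how such a skew could operate. The SNP split assumed for the overclaimers is hypothetical, not a measured figure; the only real inputs are the 56% claimed turnout, the 33.5% actual turnout, and the SNP's actual 2014 share of approximately 29%.

```python
# Purely illustrative: how turnout overclaiming can distort recalled vote.
# 56% of our respondents claimed to have voted, but actual turnout was 33.5%,
# so roughly 22.5 points of "recalled" votes come from people who did not vote.
# The SNP skew assumed for overclaimers below is hypothetical, not measured.

genuine = 33.5               # panel share who really voted (points)
overclaim = 56.0 - genuine   # ~22.5 points of claimed votes that never happened

actual_snp = 0.29            # SNP's approximate real share of the 2014 vote
overclaim_snp = 0.40         # hypothetical SNP share among overclaimers

recalled_snp = (genuine * actual_snp + overclaim * overclaim_snp) / 56.0
print(f"recalled SNP share: {recalled_snp:.1%}")  # ~33.4% vs ~29% actual

# Even with a perfectly representative sample of genuine voters, a modest SNP
# skew among overclaimers pushes recalled SNP vote several points above the result.
```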

As for Peter Kellner's claim that YouGov's method is better because their post-election poll produced a 'recalled' 2014 result much closer to the actual outcome, this holds little water once one takes into account that the proportion of YouGov panelists claiming to have voted was significantly higher even than Survation's figure: YouGov found that 67% of Scots on their panel claimed to have voted in the European Parliament election, fully double the actual turnout of 33.5%. Given that half of those claiming to have voted were therefore necessarily either lying or mistaken, or else drawn from a panel whose turnout behaviour is wildly unrepresentative of the population, the fact that all these panelists taken together produced claimed recall close to the actual results cannot be considered particularly meaningful.

Concluding Remarks

In conclusion, we do not believe that the evidence presented by Peter Kellner in this unusual critical exercise is sufficient to support any claims that:

a) Survation, ICM, Panelbase and others are wrong to use 2011 Holyrood vote as their choice of weighting target in Scotland

b) YouGov’s methods necessarily represent a more ‘accurate’ snapshot of current Scottish opinion

or, consequently

c) the current state of the Scottish independence race is definitely much less tight than the consensus figures of the various pollsters suggest

That said, we do acknowledge the possibility that online panels, both generally and particularly in Scotland, where there are significant numbers of online activists on both sides of the referendum campaign, are at risk of becoming less representative of the population than they ideally should be. Insofar as there are any consistent inaccuracies in either our or YouGov's results, we suspect this is more likely to be the root cause, and we are constantly alert to possible ways of preventing it, as no doubt are all other polling companies. Peter Kellner's analysis of the possible shortage of "red Nats" in online panel samples generally is interesting and may have some merit, but YouGov's method of weighting separately for these respondents is not something we would wish to pursue, for the reasons given above.

Nevertheless, we will certainly review our methodology after the actual result of the referendum comes in and we have some genuine evidence one way or the other. We may well also conduct our own experiments before the referendum to see how alternative methodologies compare. In general, we have no problem with receiving constructive input into our choice of methodologies from a range of quarters, as we did in January when we adopted 2011 weighting at the suggestion of Professor Curtice, and as YouGov themselves did shortly before the European Parliament elections when they adopted Survation's method of including UKIP in the party prompt for European voting intention, despite Peter Kellner having argued in January, in an article likewise based on self-financed research, that this would be incorrect.

That said, the practice of commissioning one's own research solely for the purpose of criticising another polling company's methodology seems, in our view, to be unprecedented in the industry, and is not something that we would wish to see repeated.

Patrick Briône

Director of Research, Survation

