Was Donald Trump’s win unexpected? Not if you followed the betting markets, which had Trump at a two-thirds chance of winning days out from the election. The polls, on the other hand, told a different story. Analysis of polls carried out in 15 competitive states in the three weeks before last Tuesday’s election shows that, whatever polling method was used, there was a clear and consistent bias in favour of the Democrats.
Pollsters spent an estimated half a billion dollars (£388 million) on this election, but most polling methods were still biased towards Kamala Harris by around three percentage points. One method – recruiting participants by mail – managed to be wrong by a whopping 13 points. Another, recruiting participants while they were online or using an app, was only off by around one point.
Some of the greatest names in forecasting ended up with egg on their faces. J. Ann Selzer had Harris with a three-point lead in Iowa in the final weekend before polling day – an outcome that surely would have handed the Democrats the election. Trump ended up winning the Hawkeye State by a 13-point margin.
In fact, it seems that pollsters got closer to the truth in Trump’s even less expected 2016 victory – but they were still three points off then too. The problem appears to lie in who the pollsters could reach: Democrats answered their phones while Trump voters let theirs ring out.
Another issue was the failure to estimate just how well Trump would get his vote out. The final poll aggregates from FiveThirtyEight, Nate Silver, the New York Times and Real Clear Politics all overstated Kamala Harris’s vote share while significantly underestimating Trump’s – highlighting the bias in the underlying polls.
Pollsters seemed to miss the shift of Latino voters coming out for Trump too – after missing the white vote in the previous two elections. Having won roughly a third of the male Latino vote in 2016 and 2020, an exit poll for NBC suggests Trump took over half that vote this time.
Finally, modelling contributed to the polls missing the result. To improve the accuracy of surveys after previous debacles (in 2012, the polls underestimated Obama’s vote by four percentage points) pollsters have used increasingly complicated methods to adjust their samples to give a truer reflection of the population. But as we learnt during Covid, models don’t often adapt well to a changing world. Would they have accounted for the surge in young white men coming out for Trump, possibly as a result of his appearance on the Joe Rogan podcast, for example?
Not everyone did so badly, though. James Johnson’s firm JL Partners correctly called the election for Trump – though it still underestimated his electoral college success. Worried that Trump voters would be missed by phone surveys and Zoom panels, the firm ran its surveys as in-app games, with prizes for completing the poll. It seems to have worked.
But are we being too harsh on the pollsters? The average error in swing states was not that unusual (around three percentage points). What seems to have gone wrong, though, is that Trump won every single time in states where statisticians thought the result would be a 50/50 toss-up. Pollsters simply did not expect the extent of his red wave.
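The arithmetic here is telling. If the battleground states really were independent coin flips, one candidate sweeping all of them would be very unlikely; a sweep is far more consistent with a shared, correlated polling error – the same pro-Democrat bias showing up in every state at once. A minimal sketch (assuming seven battlegrounds for illustration – the article does not give an exact count):

```python
def sweep_probability(n_states: int, p: float = 0.5) -> float:
    """Chance one candidate wins every one of n independent
    toss-up states, each won with probability p."""
    return p ** n_states

# Seven independent 50/50 states: a sweep happens less than 1% of the time.
print(f"{sweep_probability(7):.3%}")  # prints "0.781%"
```

That a sweep nonetheless occurred suggests the states were not independent draws: when one poll was off, they were all off in the same direction.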
We might accept that being within the margin of error in key battleground states is acceptably close then and that it’s just unfortunate – for pollsters and Democrats alike – that the coin toss fell the way of The Donald every time. But many would say that simply isn’t good enough: when it comes to polls there are no prizes for second place.