Ross Clark

A-levels and the dangers of predictive modelling


It turns out we’re not quite so in awe of predictive modelling after all. How different it was back in March when Professor Neil Ferguson and his team at Imperial College published their paper predicting 250,000 deaths from Covid unless the government changed course and put the country into lockdown. It was ‘the science’; it was fact, beyond question. Yet no sooner had the A-level results been published last week than a very different attitude began to prevail. How terrible, nearly everyone now says, that an 18-year-old’s future can be determined by an algorithm which tries to predict what grade they would have achieved had they sat the cancelled exams.

I agree with the latter – A-level results have been a farce, which the government is apparently going to try to put right through an announcement this afternoon. Trying to predict grades by teacher assessment, moderated by computer, based on a school’s past record, is no substitute for the real thing: having children sit down and take real exams, which are then marked. Why were those exams cancelled? They could have been held, in June, with social distancing in place. Indeed, candidates have always been placed a healthy distance apart during exams to stop them peeking at each other’s work.

But those who condemn the algorithm used to adjust A-level grades, yet are happy to swallow the results of predictive modelling on Covid-19, need to ask themselves: what is so different this time that it makes them sceptical of the art? Reduced to the essentials, Ofqual was doing just what Ferguson was doing when he came up with his figure of 250,000 deaths: both were building an algorithm which, as far as they were able, imitates a real-world situation. The only real difference is that in Ofqual’s case it has left a great number of people feeling aggrieved rather than frightened.

None of this is to say that predictive modelling has no uses at all, or that it should never be undertaken. But it has to be read with a healthy dollop of scepticism, because while it may be tempting to think that it is producing a reliable forecast, the almost certain reality is that both the model itself and the assumptions behind it will be flawed. There is already evidence of the impossibility of predicting exam grades – a UCL study published last week attempting to predict grades from past performance found them to be wrong in three quarters of cases.

As for predicting deaths from Covid-19, the best test case we have comes from a team at Uppsala University, which in April used a version of Ferguson’s model to predict the number of deaths in Sweden had the government failed to impose a lockdown. It predicted 90,000 dead by the end of May; in the event the figure was 4,350.

We are often said to be living in an age of ‘post-fact’ or ‘anti-science’, but what really distinguishes contemporary attitudes towards science is the inability to separate observation and prediction. For many people, it is all the same thing: it is ‘the science’ and therefore objective truth. Not all modelling is useful and not all observation is beyond reproach – far from it – but we would be a lot better off if we were aware of the serious limitations of modelling and treated it accordingly.