Ross Clark

How robust was the evidence for lockdown?


Ever since it was first published in May, the Office for National Statistics’ weekly infection survey has been looked upon as the gold standard of Covid data. It is based on swab testing of a large, randomised sample of the population, who are tested repeatedly to see if they are infected with the virus – the results from which are scaled up to arrive at an estimate of the incidence of the disease in the population as a whole.

Being a randomised sample, it does not suffer from the drawback of the daily Public Health England figures for confirmed infections – which are heavily influenced by how many tests are being conducted. As the number of tests has expanded, so too has the number of confirmed infections.

However, the most recent edition of the ONS’ infection survey shows just how fickle the data can be – and how difficult it must be for the government to make decisions when data is so subject to change. 

Each week, the ONS has produced two graphs. One shows the ‘officially reported estimates’ for the number of new daily infections per 100,000 people over the past seven days. Beside that, it also publishes a graph of ‘modelled estimates’ – which are adjusted for such things as false positives and false negatives and show a smoothed-out line of how the ONS thinks infections have changed day by day. Until last week, the two graphs broadly agreed with each other, as you can see from this edition published on 30 October.


I choose this edition because it is the latest one which would have been available when the Prime Minister and cabinet made their decision to place England in a second lockdown, a decision made the following day. 

Both graphs show the infection rate doubling in the two weeks to 17 October, the latest date for which data was then available. The government has been criticised for basing its decision on the ‘dodgy graph’ which claimed that deaths could rise to 4,000 a day by December – an estimate which was already out of date by the end of October. But the ONS data could, on its own, have been used to justify a second lockdown.

However, look at the latest edition of the ONS’ infection survey, published last Friday, 4 December, and something very odd seems to have happened. The two graphs – the ‘officially-reported estimate’ and the ‘modelled estimate’ – no longer agree.


In fact, they show a very different picture. The modelled estimate now suggests that infection rates in October were much lower than previously thought – indeed, it suggests that the infection rate hardly changed throughout the whole month.

While the graph published on 30 October could be used to argue for an immediate lockdown, that published on 4 December suggested there was no emergency and that the Tier system might have been given more time to work. As it happened, the next edition, published on 6 November, showed that new cases had begun to fall even before lockdown was enacted.

On the question as to why the two graphs are suddenly so different, the ONS says: 

‘We publish the full back-series of modelled estimates for transparency and these should not be considered ‘revised official estimates’. We have always advised people to use our official estimates as originally published as these are unaffected by the effects of policy changes that took place after publication.’

In other words, they seem to be saying: ignore the graph we published last Friday and only take notice of the figures we published back on 30 October. In which case, why do they publish the second graph at all?

From the beginning of this crisis, the government has tried to ‘follow the science’ – only to realise just how uncertain that science is. There has been widespread criticism of the government’s reliance on predictive modelling, by Imperial College and others. But the sharp change in the ONS graphs shows that modelling can be just as unreliable when it is used to try to describe what has already happened.

I don’t envy ministers having to make decisions with huge economic and social implications, based on very uncertain data. But it ought to be added that the government’s decision to press ahead with mass community testing is going to make its life even harder. The Lateral Flow Tests used in community testing have far higher rates of false negatives – picking out fewer than 60 per cent of cases when used in a community setting – than the laboratory PCR tests used, for example, in the ONS infection survey. Yet they are going to feed a huge explosion in new data which is going to be even more difficult to read than the data we have already.
