Ross Clark

Two years on, what’s the evidence for lockdown?

Did lockdowns save lives? We will never have a definitive answer to this vital question because it was impossible to conduct controlled experiments — we don’t have two identical countries, one where lockdown was imposed and one where it wasn’t. Nor is it easy to compare similar countries, for the simple reason that every country in the world — bar Comoros in the Indian Ocean — reacted to Covid by introducing at least one non-pharmaceutical intervention (NPI) by the end of March 2020.

A team from Johns Hopkins University has, however, assessed the many (albeit flawed) studies into whether lockdown works — a ‘meta-analysis’. It reviewed 24 studies: some compared countries according to the stringency of their lockdowns, some according to whether or not they introduced stay-at-home orders, and others looked at individual NPIs — which is possible because some countries, for example, introduced mask mandates and others did not.

The conclusion? That there was no clear link between lockdown stringency and fewer deaths in the spring of 2020, with lockdowns reducing deaths by only 0.2 per cent. The review further concludes that stay-at-home orders reduced deaths by 2.9 per cent. It could find no clear evidence that any individual NPI had a noticeable effect on mortality.

Needless to say, it is possible to be critical of this analysis. The team weeded out a great number of studies which it considered to be inadequate, such as any study which measured actual events against modelled outcomes. On that basis it dismissed an Imperial College paper produced by Dr Seth Flaxman in July 2020 which claimed that lockdowns had already saved 3.1 million lives in Europe, including 470,000 in the UK. The Johns Hopkins team argues that such a method wrongly assumes that NPIs are the only possible factors — and that voluntary, personal changes in behaviour do not count. The paper also accorded greater weight to studies produced by social scientists than to those by epidemiologists, on the somewhat questionable basis that social scientists are the greater experts in the mortality effects of social policy.

But it is a reminder that there are very wide variations in the academic findings used to work out the effect of lockdowns and to inform government policy. To underline how difficult it is, just imagine if Boris Johnson’s government had ordered a fourth lockdown just before Christmas in reaction to the Omicron variant — something which we now know very nearly did happen. Infection rates peaked soon after, with cases, hospitalisations and deaths at far lower levels than in the scenarios presented by Sage. The lockdown would inevitably have been credited with saving many lives, and we would never have been any the wiser. But the lockdown never happened, and the modelling was revealed to be faulty. What would have happened in spring 2020 had the government resisted calls for lockdown? We will never know, but the Johns Hopkins study at least presents a case that the outcome may not have been all that much different from what did actually happen.