For anyone who watches the daily Covid-19 briefings, it is quite clear that too many of our politicians and journalists have little to no understanding of science and mathematics. Out of the 26 ministers attending cabinet, only three have higher-level STEM (science, technology, engineering and mathematics) backgrounds. In parliament, only around 100 MPs have science backgrounds.
Why does this matter? Training in science gives people a different perspective on the world. It makes them more sceptical, more rigorous in their approach and, most importantly, teaches them what science can and cannot answer. Unfortunately, too many of our politicians don’t benefit from this approach – and coronavirus has exposed this problem at the heart of government. Here are seven concepts that ministers seem to struggle with:
1. Testing, testing, testing
No matter how carefully designed, created or performed, no test is perfect. Even if a test is 99.99 per cent accurate, it will generate errors – and potentially a false sense of security for those without a grasp of probability. These errors become increasingly important as the volume of testing increases.
Let’s say we have a Covid-19 test that is 95 per cent accurate in terms of both sensitivity (the ability of the test to correctly detect people who have the virus) and specificity (the ability of the test to correctly identify people who do not).
We gather 100,000 people together to be tested. Imagine we know that 1,000 of the group have the virus. Using our 95 per cent accurate test, out of the 1,000 people who have the virus we would detect 950 of them and miss 50 people who in fact have the virus. However, of the remaining 99,000 people the test would also falsely detect Covid-19 in 4,950 – 1 in 20 – of them.
This means that from our group of 100,000, testing would return 5,900 positive results, even though only 1,000 people actually have the virus (and 50 of those were missed by the test). Of everyone who tests positive, only 950 – about one in six – are genuinely infected.
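To make the arithmetic concrete, here is a minimal sketch in Python using the figures above (the 95 per cent test and the assumed 1,000 infections in 100,000 people are, of course, illustrative):

```python
# Worked example of the testing arithmetic above; all figures are the
# illustrative ones used in the article.
population = 100_000       # people tested
infected = 1_000           # people who actually have the virus
sensitivity = 0.95         # chance the test detects a real infection
specificity = 0.95         # chance the test clears a genuinely healthy person

true_positives = infected * sensitivity                         # 950 detected
false_negatives = infected * (1 - sensitivity)                  # 50 missed
false_positives = (population - infected) * (1 - specificity)   # 4,950 wrongly flagged

total_positives = true_positives + false_positives              # 5,900 positive results
share_real = true_positives / total_positives                   # ~16 per cent

print(f"Positive results: {total_positives:,.0f}")
print(f"Actually infected among them: {true_positives:,.0f} ({share_real:.0%})")
```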
2. Reliance on R
Calculating the rate of transmission, the R number, is not a precise science. It is not like using a thermometer to assess whether a person has a fever. It depends on how you define a case, the quality of your data, how effective your surveillance and detection processes are, delays in reporting and how accurate your tests are.
As the government has explained, a value of R greater than one indicates that the infection may grow or persist in the population while a value of R less than one indicates that the infection will decline in the population.
Even after a pandemic has passed and all the data is in, there can be competing calculations and opinions on what the ‘true’ R value is. For example, the transmission rate of the 1918 to 1919 Spanish flu outbreak has been placed at anywhere from 1.4 to 2.8. In Canada, four separate academic studies (three in 2010 and one in 2012) have looked into the rate of transmission of the 2009 swine flu pandemic. Their calculations of R vary from 1.12 to 2.50.
The UK government suggests that the current rate of Covid-19 transmission in the UK is between 0.6 and 1.0 – in other words, 0.8 with a margin of error of ± 0.2. Plus or minus 0.2 might not sound like a lot, but if you signed up to a new job with a salary of £30,000 a year, the same margin of error would mean you could actually end up being paid anywhere between £22,500 and £37,500.
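The salary analogy is simply the same relative uncertainty – 0.2 divided by 0.8, or 25 per cent – carried over to money. A quick sketch of the arithmetic:

```python
# The R estimate of 0.8 ± 0.2 carries a relative uncertainty of 25 per cent.
r_central, r_error = 0.8, 0.2
relative_error = r_error / r_central          # 0.25

# The same relative uncertainty applied to a £30,000 salary.
salary = 30_000
low = salary * (1 - relative_error)           # £22,500
high = salary * (1 + relative_error)          # £37,500
print(f"£{low:,.0f} to £{high:,.0f}")
```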
Placing so much importance on the R value, and using it to set the restrictions placed on work, education, enjoyment and freedoms, is an approach prone to problems: it pretends the number can be calculated more accurately than it can. As Ed Humpherson, of the UK Statistics Authority, told the New York Times: ‘Being trustworthy depends not on conveying an aura of infallibility, but on honesty and transparency.’
3. Following ‘the best science’
Boris Johnson and his deputy Dominic Raab have repeatedly spoken of following ‘the science’ or even ‘the best science’. Unfortunately, these statements are complete nonsense; science is almost never black and white.
In normal language, ‘the science’ is interpreted as proven fact: a hypothesis that has been thoroughly tested and verified by experimentation or repeated observation. That the earth travels around the sun every 365.256 days, or that washing your hands with soap and water kills coronavirus, are facts.
‘The science’ for the management of a pandemic is actually the opinions of scientific advisers, their models and the assumptions on which they are based. For example, the Imperial model assumes that closing schools and universities increases contact in the community by 25 per cent, that 81 per cent of the UK population would be infected if no counter-measures were taken and that to avoid a second peak, social distancing could be needed for 18 months or more. It’s impossible to know for sure whether these assumptions are accurate or not.
Normally, when scientists want to test whether an intervention has the results they predict, they conduct a randomised controlled trial. Participants are randomly assigned to one of two groups: one group is subjected to the intervention and the other is given a placebo. By comparing the two groups, scientists can see whether there is a statistically significant difference between those who have and have not had the intervention.
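As a toy illustration of the method – with an invented group size and invented infection risks, purely for the sake of the sketch – randomised assignment looks something like this:

```python
import random
random.seed(0)

# Randomly split 200 hypothetical participants:
# True = intervention group, False = placebo group.
group = [True] * 100 + [False] * 100
random.shuffle(group)

# Invented effect for illustration only: assume the intervention cuts
# infection risk from 20 per cent to 10 per cent.
infected = [random.random() < (0.10 if treated else 0.20) for treated in group]

intervention_cases = sum(i for t, i in zip(group, infected) if t)
placebo_cases = sum(i for t, i in zip(group, infected) if not t)
print(f"Intervention group: {intervention_cases}/100 infected")
print(f"Placebo group:      {placebo_cases}/100 infected")
# A significance test (e.g. chi-squared) would then tell us whether a gap
# this size could plausibly have arisen by chance.
```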
Unfortunately, this is impossible for the management of a pandemic. We cannot go back in time and create an identical copy of the UK to see what a different set of interventions – no intervention at all – would make to the spread and impact of Covid-19.
The challenge is that politicians, media and the public crave certainty when there is little.
4. Exponential growth
Covid-19 has not grown exponentially; it has followed the epidemiological curve.
Exponential does not mean ‘fast’ or ‘quick’; it has a specific mathematical meaning. If a quantity grows by a constant factor over each fixed period of time – for example doubling, tripling or quadrupling every day or week – that is exponential growth. As time goes on, the increases become larger and larger, because doubling a large number produces an even larger number.
In the first few weeks of recording coronavirus infections, you could try to argue cases were doubling every three or four days. But the data at the time would have been very poor, testing limited and sample sizes small. It is fairer to look at the mortality statistics.
If Covid-19 had grown exponentially, say doubling every day, then between the first recorded UK death from Covid-19 on 7 March and the introduction of lockdown on 23 March – 16 days later – deaths should have doubled 16 times, to 65,536. In reality, 285 deaths had been recorded. If you calculate the three-day rolling average growth rate for deaths using Office for National Statistics data, it is clear that there has never been exponential growth in the true sense of the word.
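A quick check of that doubling arithmetic, using the dates and figures above:

```python
from datetime import date

first_death = date(2020, 3, 7)      # first recorded UK Covid-19 death
lockdown = date(2020, 3, 23)        # lockdown introduced
days = (lockdown - first_death).days            # 16 days

expected_if_doubling_daily = 2 ** days          # 65,536
actual_recorded_deaths = 285

print(f"{days} days of daily doubling would give {expected_if_doubling_daily:,} deaths")
print(f"Deaths actually recorded by 23 March: {actual_recorded_deaths}")
```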
5. The fifth test
The government has set out five tests that have to be met before lockdown can be lifted.
The fifth test is ‘no risk of a second peak’ or, more recently, that ‘any adjustment to the current measures will not risk a second peak of infections’. Either version is impossible to meet in reality. You can get to a point of low or minimal risk, but completely ruling out any risk is practically impossible.
What if there is a large number of undetected cases lurking? What if Public Health England has made a mistake in its calculations, or lost a batch of data? What if the Covid-19 virus suddenly mutates slightly, making it vastly more infectious? What if there is never a vaccine? Even if we do create one, what happens if it later turns out to be ineffective? And what number of infections counts as a second peak? Five? 50? 500?
6. Modelling results being treated as facts
The numbers spat out by models are not facts. They are a best guess of what may or may not happen, based on the assumptions and relationships taken into account by whoever created the model. Real-life situations are often complex and nonlinear, which means models are very sensitive to the initial conditions and assumptions people choose.
In 1987, the statistician George Box succinctly and famously wrote that ‘all models are wrong, but some are useful’. When predicting where a hurricane will land in the US, modellers produce ‘spaghetti diagrams’ with wildly different possible paths. Every inflation report produced by the Bank of England includes a fan chart to illustrate the uncertainty involved in predicting how anything will play out in the future. Infection models are no different.
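To see how sensitive model output is to input assumptions, here is a toy epidemic model – a simple SIR model, with every parameter invented purely for illustration – run twice with only a small change in the assumed reproduction number:

```python
# Toy SIR epidemic model; every parameter here is invented for illustration.
def peak_infections(r0, population=66_000_000, initially_infected=1_000, days=365):
    recovery_rate = 0.1                  # assume infections last ~10 days
    beta = r0 * recovery_rate            # implied daily transmission rate
    s, i = population - initially_infected, initially_infected
    peak = i
    for _ in range(days):                # crude one-day time steps
        new_cases = beta * s * i / population
        s -= new_cases
        i += new_cases - recovery_rate * i
        peak = max(peak, i)
    return peak

# A small change in one assumption produces a very different projection.
for r0 in (2.2, 2.6):
    print(f"Assumed R0 = {r0}: peak of roughly {peak_infections(r0):,.0f} simultaneous infections")
```

With these invented numbers, nudging the assumed R0 from 2.2 to 2.6 moves the projected peak by roughly a third – and a real epidemic model contains dozens of such knobs.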
At the worst end of the scale, some models are created to give the results the modeller – or their funder – wants to see. At the milder end, genuine mistakes and small oversights are always made. Given the significant impact models can have on government decision-making, their assumptions, methods and outputs are normally carefully scrutinised through the peer review system, in which academic papers are independently reviewed by other experts in the field. This has not happened for the Covid-19 models, meaning their forecasts should be treated with even more caution.
So when the Prime Minister spoke of 500,000 deaths being avoided, he was misleading the public. There is absolutely no way to know how many deaths there would have been had a different approach been taken.
7. ‘The lockdown is working’
Some weeks after the UK introduced the lockdown, the number of daily deaths and the number of cases started to fall. Can you say that the lockdown caused this fall or would it have happened anyway? What about in the long run? Will more deaths be caused by people avoiding treatment due to a fear of Covid-19 than the virus itself?
The mantra of statisticians around the world is that correlation does not equal causation. Especially so with limited data.
Author Tyler Vigen looked at divorce rates in the US state of Maine over a ten-year period. He found that the rate per 1,000 people had fallen from around five to just over four; over the same period, margarine consumption fell from 3.5kg to 1.5kg per person. The two series show a high degree of correlation. However, I don’t think anyone would seriously believe that margarine was somehow impacting matrimonial bliss.
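You can see how easily two unrelated declining trends correlate. In the sketch below, the yearly values are simply straight-line interpolations between the endpoints quoted above (the real series are Vigen’s, not these), yet the correlation comes out essentially perfect:

```python
# Two made-up, unrelated declining series: straight-line interpolations of
# the endpoints quoted above, purely for illustration.
years = 10
divorce_per_1000 = [5.0 - (5.0 - 4.1) * t / (years - 1) for t in range(years)]
margarine_kg = [3.5 - (3.5 - 1.5) * t / (years - 1) for t in range(years)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Correlation: {pearson(divorce_per_1000, margarine_kg):.2f}")   # 1.00
```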
So to say that ‘it is a fact that by adopting [the lockdown] we prevented this country from being engulfed’ is completely wrong. Establishing whether A causes B is a tricky task – it takes time and rigorous research. At this point, it is not certain whether the lockdown is causing more harm than good.
Tom Lees is a theoretical physicist, policy expert and managing director of consultancy firm Bradshaw Advisory