Jonny Braden | Opinion
If you live in a democracy, you should be safe to assume that, before your government takes a course of action which will directly affect every single citizen for months on end, and indirectly for perhaps a lifetime, the data used to drive that decision would be thoroughly scrutinised.
Unfortunately, in the United Kingdom, this has not been the case.
A world-leading software engineer has recently released a scathing review of Neil Ferguson’s (and Imperial College London’s) model, which was presented to Boris Johnson as justification for locking down an entire country, risking the destruction of our economy and unleashing a generational misery upon us.
The report completely eviscerates any credibility that Neil Ferguson had left after his earlier late-night romp. But before we get into the nitty-gritty, first a little background about its author.
Sue Denim (obviously not the author’s real name) has been writing software for 30 years, working at the world’s most successful tech companies and playing a key role in the development of their most complex products. So, I think it’s fair to say we can take Sue’s word as that of an actual expert!
Given that I am but a simple contributor, and not a world-leading software engineer, I will only attempt to break down some of the most shocking revelations.
1. The code is terrifyingly basic
The first and perhaps most important criticism of the code is that it is overwhelmingly simple: it amounts to just 15,000 lines. While this may sound like a lot, for a point of reference, the Microsoft Windows 95 operating system had over 6 million lines of code. Some of the day-to-day apps on your phone will likely have more than double the number of lines that Neil Ferguson’s model used. It should shock us all to our core that such a gargantuan decision was made not with a supercomputer, but with the equivalent of an app that an experienced developer could have designed and built over the course of a few weekends.
2. The code doesn’t work
One of the most basic principles of modelling is that you need to be able to produce consistent results before you can start making predictions. For example, if I wanted to model simple mathematical equations, I would need a base model that worked as follows:
1 + 1 = 2
1 + 1 = 2
1 + 1 = 2
In this scenario, we have consistent results and can therefore begin predicting results by introducing variable inputs. In the case of Neil Ferguson’s algorithm, the model would produce different results even when the input was exactly the same, essentially looking like this:
1 + 1 = 2
1 + 1 = 9
1 + 1 = 37
There are clearly some fundamental problems with how this algorithm has been designed. Surely Neil Ferguson or his team should have fixed them?
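The failure described above is, in software terms, a reproducibility bug: a simulation that should be deterministic gives different answers on identical inputs. A minimal sketch in Python makes the distinction concrete; note that `toy_epidemic` is an invented stand-in for illustration, not Imperial’s actual model.

```python
import random

def toy_epidemic(seed, days=30, infected=1.0, spread=1.5):
    """A deliberately tiny stochastic growth model (illustrative only)."""
    rng = random.Random(seed)
    for _ in range(days):
        # Each day, case numbers grow by a random factor up to `spread`.
        infected *= rng.uniform(1.0, spread)
    return round(infected)

# Reproducible: the same seed yields the same answer on every run,
# which is what a credible model must guarantee.
assert toy_epidemic(seed=42) == toy_epidemic(seed=42)

# Non-reproducible: seeding from system entropy gives a different
# answer each run -- the "1 + 1 = 37" failure mode described above.
run_a = toy_epidemic(seed=None)
run_b = toy_epidemic(seed=None)
print(run_a, run_b)  # almost certainly differ
```

The design point is simply that identical inputs must map to identical outputs; any hidden source of randomness (an uncontrolled seed, thread timing, uninitialised memory) breaks that contract.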
3. Neil Ferguson knew the code was broken and tried to cover it up
Neil Ferguson and his team at Imperial College London were completely aware of the inherent flaws in their code, yet did nothing to fix them. Concerns were raised by a team at Edinburgh University after they simply tried to save a file in a different format and found that this resulted in the algorithm predicting an additional 80,000 deaths! Instead of addressing the problems at the heart of the code (which would have required some hard graft), Neil and his team simply ran the fundamentally flawed model numerous times and took an average. In any other job, Neil Ferguson would have been struck off for professional negligence, and perhaps even prosecuted for fraud. Somehow, though, through the murky world and connections of academia, this man ended up advising our Government to take its most drastic course of action since the surrender of Hong Kong.
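The work-around described above, re-running a non-deterministic model and reporting the average, can be sketched in a few lines of Python. This is a hypothetical illustration (`unstable_model` is invented for the example): the mean of many runs looks stable and respectable, even though any single run can swing by tens of thousands and none of them is reproducible.

```python
import random
import statistics

def unstable_model():
    # Hypothetical stand-in for a simulation whose output drifts from
    # run to run because its randomness is not pinned to a fixed seed.
    return 50_000 + random.uniform(-40_000, 40_000)

runs = [unstable_model() for _ in range(1_000)]

# The average converges and looks like a solid headline figure...
print(round(statistics.mean(runs)))  # hovers near 50,000

# ...but individual runs still disagree wildly, so no single result
# can be reproduced or checked by an outside team.
print(round(max(runs) - min(runs)))  # routinely tens of thousands apart
```

Averaging smooths the noise in the summary statistic, but it does nothing to repair the underlying non-determinism: the bug is still there, merely hidden.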
Our new national hero, Sue Denim, delivered a final eulogy to Ferguson’s credibility, stating in her report:
“All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.”
It was somewhat entertaining to watch Twitter erupt with righteous indignation when Professor Ferguson’s sexually-related lockdown indiscretions were exposed. The embarrassment felt by a man caught with his pants down (literally) in front of an entire nation was justice enough to balance out his misdeeds. In the wake of this takedown by Sue, though, I feel as though the mood of the nation may be entirely less forgiving.
The question being raised here is not about the necessity of the lockdown itself. It has clearly saved lives and, on the balance of probabilities, has been the driving factor in stopping the NHS from being overwhelmed. The major question this report brings to the forefront of our minds, though, is how such a deeply flawed algorithm, produced by a man of questionable character, was seemingly accepted at face value without proper scrutiny by our Government, and then used as justification for imposing an economically stifling and potentially disastrous lockdown.
We are a country that has been held up as a beacon of good governance for centuries. When this crisis is over, there will be many questions to answer, but figuring out how such an amateur modelling method may have sealed the fate of a whole generation of Britons must surely take priority.