
Covid-19 in 2021 (Part II): Assessing ‘The Science’

(This is Part Two of a three-part series. Click here for Part One and here for Part Three).

One thing that most of us can agree on in these polarized times is that decisions should be made on the basis of scientific evidence rather than superstition or preferences of multinational corporations. And thus, ‘follow the science’ is the cry. But what does ‘the science’ say and why aren’t the public encouraged to consider this?

Below is a brief overview of the papers that changed the world…

1. June 2020 – masks, WHO study – “masks reduce spread by 80%”

It was no surprise that, after the US Surgeon General, UK doctors and (most famously) Tony Fauci made it clear that there was no possible way that facemasks could provide protection against COVID, the public would be suspicious when the same characters then told them that ‘the science’ now said that facemasks worked. So suspicious that many even asked what extraordinary evidence had been generated to justify such an extraordinary U-turn. They found that such evidence came from lab models that involved spraying aerosols through fabrics.

So enter the all-powerful meta-study commissioned by the WHO and published in the Lancet. The headlines alone made an impressive case: 29 studies were included, the trials tracked human outcomes, and the masks demonstrated a huge 80% drop in transmission.

The only problem was the contents. Of the 29 studies included, only four related to the SARS-CoV-2 virus. Of these four, one provided no results related to the use of facemasks, and another related to the use of N95 respirators in healthcare settings (and thus tells us nothing about the effect of masks in community settings).

Of the remaining two SARS-CoV-2 studies, both were misrepresented. The meta-study reported Wang et al’s results as showing 120 infections in the study population, with infections recorded in only 1 of 1286 healthcare workers wearing a mask, compared to 119 of 4036 who were not. However, this is simply not accurate; the original study is very clear. It states that there were indeed 120 infections, but that 94 of these occurred in those wearing masks and only 25 in those without. The 1 of 1286 figure is reported in Table 2, but it relates to those using N95 respirators (described in that paper as ‘Level 2 protection’). So, despite it being very clear that the majority of infections were found in those wearing masks, the meta-study researchers simply ignored this and ‘transported’ the figures from respirator use vs non-respirator use. This is reference #70 in the meta-study; even those intimidated by scientific papers may wish to check these figures for themselves, as the comparison is not complex.
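
To see how much the ‘transported’ figures matter, here is a quick back-of-envelope check using only the numbers quoted above. This is my own arithmetic, not a calculation from the meta-study itself:

```python
# Back-of-envelope check of the figures quoted above (not a calculation from
# the meta-study itself).
masked_infected, masked_total = 1, 1286       # the 'transported' respirator figures
unmasked_infected, unmasked_total = 119, 4036

risk_masked = masked_infected / masked_total
risk_unmasked = unmasked_infected / unmasked_total
print(f"Apparent risk reduction: {1 - risk_masked / risk_unmasked:.0%}")  # ~97%

# The raw split reported in the original Wang et al paper, as quoted above:
infections_masked, infections_unmasked = 94, 25
print(f"Infections in mask wearers: {infections_masked}, in non-wearers: {infections_unmasked}")
```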

The researchers were also highly selective in reporting the findings of Heinzerling et al, reference #44 in the meta-study. The meta-study claimed that 0/31 (0%) of healthcare workers wearing a mask became infected, compared to 3/6 (50%) who did not wear a mask. This is an amalgamation of several different figures found in Table 2 at the bottom of the study and does not reflect its findings. Like many small-scale studies, it does not support any firm conclusion, because the data simultaneously associate always wearing a mask with protection and never wearing a mask with protection.

For those who are interested in the specifics, here they are. Table 2 reported 37 outcomes (as either ‘got infected’ or ‘did not get infected’): 3/37 were infected, 34/37 were not. Of the three infected, two (67%) used a mask sometimes and one (33%) never used a mask. Of the 34 who did not get infected, 14/34 (41%) never used a mask. This creates an association between never using a mask and better outcomes. But it’s not that simple, because a) the sample size is so tiny that no-one can really draw conclusions anyway, and b) none of the 3 infected workers reported ‘always’ using a mask, versus 7/34 (21%) of the non-infected workers, which would create a conflicting association between mask use and reduced infection. In short, no conclusion can be drawn here. However, there are interesting patterns related to time of exposure (time spent with infected patients) that deserve further investigation.
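
A small sketch of these two conflicting associations, using only the figures taken from Table 2 above (again, my own arithmetic):

```python
# Sketch of the conflicting associations described above, using only the
# figures taken from Table 2 (my own arithmetic).
infected_total, not_infected_total = 3, 34

never_mask_infected, never_mask_not_infected = 1, 14
print(f"Never masked: {never_mask_infected / infected_total:.0%} of infected vs "
      f"{never_mask_not_infected / not_infected_total:.0%} of non-infected")   # 33% vs 41%

always_mask_infected, always_mask_not_infected = 0, 7
print(f"Always masked: {always_mask_infected / infected_total:.0%} of infected vs "
      f"{always_mask_not_infected / not_infected_total:.0%} of non-infected")  # 0% vs 21%
```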

So we have a meta-study that reports that “masks reduce transmission of COVID-19 by 80%” despite it featuring zero studies that show masks reducing transmission of COVID-19. And yes, this is ‘the science’ that was used to justify mandates across a large number of countries.

Alternative headline – “still zero real-life studies to support the use of masks in preventing COVID-19”.

2. June 2020: Lyu and Wehby – “Masks reduce spread of COVID”

Following the widespread mocking of the Lancet paper (above), here came a paper that set out to provide some actual evidence that facemasks work, and seemed hell-bent on doing so. The researchers weren’t going to let something as silly as basic scientific standards get in the way. So what they did was this: they looked at the case numbers registered in the 15 US states that introduced mask mandates. They measured the number of daily cases in the five days before each mandate came in and then tracked this over the 21 days that followed. Their figures indeed showed an average 2 percent drop in daily cases by the end of this three-week period. Conclusion: masks work!

How did this compare to the control group? They didn’t measure. How did this compare to the states that didn’t introduce masks? They didn’t measure either. How did they track actual mask usage? They didn’t. How did they correct for the confounding factors (social distancing, shelter-in-place orders, closures of bars and restaurants)? They didn’t.

Perhaps the most blindingly obvious problem with this study – a bit of a theme developing here – is that they took the pre-mask measures at the peak of the first wave and then conducted follow-up measurements when the baseline case level was falling nationally. Follow-up measurements in this study occurred between April 4th and May 22nd.

With just a single check against the bigger picture, we see that these states recorded a 2% drop in cases while the nation as a whole recorded a 20% drop. A more honest approach would have been to ask what it was about these 15 states that meant they recorded such a small reduction in case numbers while the nation as a whole saw a much more dramatic drop. Could it be that masks had an impact, directly or indirectly (anxiety, etc)? Could it be that other measures (which tended to come with draconian mandates) also contributed? Maybe the increased worry encouraged in these states saw more people get tested? Could it be that the virus was due to follow the pattern seen in all prior epidemics and fall off naturally? We don’t know, because the researchers don’t say. They just tell us that masks are effective and that further investigations should be made into the effectiveness of threatening non-mask wearers with jail. Not joking.
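
One way to put the study’s 2% figure in context is a naive difference-in-differences comparison against the national trend, using the rounded figures above. This is my own illustration, not a calculation from the paper:

```python
# Putting the 2% figure in context against the national trend over the same
# window (rounded figures from the text above; my own illustration, not the paper's).
mandate_states_change = -0.02   # ~2% drop in daily cases in the 15 mandate states
national_change = -0.20         # ~20% drop nationally over the same period

# A naive difference-in-differences reading: relative to the national trend,
# the mandate states actually fared worse.
print(f"Change relative to national trend: {mandate_states_change - national_change:+.0%}")
```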

Alternative headline: “States that introduced mask (and other) mandates saw a 2% drop in cases while the nation as a whole recorded an 18% drop”

 

3. December 2020 – Original Pfizer study

I have no doubt that this study will go down as required reading for children studying the history of science in decades to come, as it really pushed the boundaries when it comes to the abandonment of scientific principles.

At the time of publishing, discussions raged over how realistic it was to expect vaccines to come to the rescue. At this point, scientists had spent 15 years attempting to fashion a product that successfully provided protection against coronavirus infection. The closest that any team had got was generating antibodies in mice; however, these same mice then all developed worrying inflammatory pathology following re-exposure, a phenomenon known as antibody-dependent enhancement (see figure below, with the effect of the same viral exposure in non-vaccinated subjects on the far right).

Accordingly, many were concerned about whether this new generation of vaccines would be able to deliver results where the prior attempts had failed. They needn’t have worried; Pfizer decided to solve this dilemma by skipping such trials.

And so we arrive at this initial trial. 43,000 participants were enrolled and tracked over two months. What clinical endpoints were identified to determine whether the injections were going to save lives? Well, there were two: PCR-confirmed infections, and COVID-like illness (without any testing). More than 19 out of 20 ‘infections’ recorded during the trial were not confirmed by any laboratory measure (specifically, there were 170 PCR-confirmed infections, split 8 to 162 between the experimental and control groups, whereas the FDA report on the data noted 3,410 COVID-like illnesses, split 1,594 to 1,816). No discussion is offered as to why these participants were not tested, or why the analysis did not include these figures (although many, such as Peter Doshi in the BMJ, note that including these suspected cases would leave only a 19% drop in COVID-like illness between the two groups, which falls well below the FDA’s stated 50% requirement for emergency licensing). No discussion is offered as to why they recorded this data and then omitted it from their own published analysis.
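
For those who want to see where Doshi’s 19% comes from, here is the arithmetic using the counts quoted above. It is my own sketch, and it assumes the two arms are of roughly equal size so that simple case ratios approximate rate ratios:

```python
# Reproducing the arithmetic behind the ~19% figure mentioned above (my own
# sketch using the counts quoted in the text; the two arms are assumed to be
# of roughly equal size, so simple case ratios approximate rate ratios).
vaccine_confirmed, placebo_confirmed = 8, 162         # PCR-confirmed cases
vaccine_suspected, placebo_suspected = 1594, 1816     # 'COVID-like illness', untested

efficacy_confirmed_only = 1 - vaccine_confirmed / placebo_confirmed
print(f"Efficacy, confirmed cases only: {efficacy_confirmed_only:.0%}")   # ~95%

efficacy_all = 1 - (vaccine_confirmed + vaccine_suspected) / (placebo_confirmed + placebo_suspected)
print(f"Efficacy, confirmed plus suspected: {efficacy_all:.0%}")          # ~19%
```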

It also did not discuss the number of illnesses recorded in the experimental group versus placebo (409 versus 287) in the seven days following treatment.

There is also the lingering mystery of why Pfizer excluded 311 people from the vaccination arm, compared to only 60 from the placebo arm, citing ‘protocol deviations’. No details were provided as to what these deviations were, and no explanation was offered for this excess of exclusions in the experimental group (roughly five times that of the control group).

A lack of transparency has been highlighted by many, including Dr J Bart Classen, who has sifted through the supplementary data to point out some major safety issues with this (and the other) vaccine products. In his August article, he points out that:

  • The Pfizer study shows 262 severe adverse events in the experimental group compared to 172 in the control arm (1.52x the control figure)
  • In the Johnson & Johnson trial, this was 595 severe adverse events versus 331 in the control arm (1.80x)
  • The Moderna trial showed a startling 3,985 severe adverse events versus 943 in the control arm (4.23x)

It is important to point out that these aren’t headaches, fever or a sore arm. These relate to FDA-defined Grade 3 or Grade 4 events, which are respectively classified as needing medical attention or potentially life-threatening. Yet the huge discrepancies between the two groups were just ignored.
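
A quick check of the ratios implied by the counts above (my own arithmetic, not Classen’s):

```python
# Simple check of the ratios implied by the severe adverse event counts above.
trials = {
    "Pfizer": (262, 172),
    "Johnson & Johnson": (595, 331),
    "Moderna": (3985, 943),
}
for name, (experimental, control) in trials.items():
    print(f"{name}: {experimental} vs {control} severe events "
          f"= {experimental / control:.2f}x the control arm")
```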

But despite these (huge) flaws, the biggest one is that all of these trials worked to a clinical outcome of a relative drop in mild infections. Not serious infections, not hospitalizations, not deaths. It is not a problem that a study fails to show a single person avoiding hospitalization or death, but it is a problem when governments introduce policies ‘to reduce hospitalization or death’ on the basis of such studies.

Alternative headline: “according to our guesswork, 1 in every 133 can get relief from mild symptoms… just don’t ask us about those side-effects”

 

4. January 2021: The Israeli study – “vaccines are 51% effective after one dose”

Arguably the science paper to have had the most impact on our daily lives, this was the first paper to provide results on the impact of vaccination in a population setting. It followed 503,875 individuals vaccinated between December 2020 and January 2021, tracking them for 24 days after the first dose to see if there was a change in the number of infections.

What they saw was just under 0.2% of the group recording infections in the first 12 days, followed by a slower increase in Days 13 to 24. There were 51% fewer infections in this second 12-day window compared to the first. And, while it remains hard to believe, this is where the headline figure was taken from.
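
To make the logic explicit, here is how a ‘51% effectiveness’ figure falls out of that comparison. The counts below are hypothetical (the paper’s raw numbers are not quoted above); only the 51% relationship is taken from the text:

```python
# How a '51% effectiveness' figure arises when Days 1-12 are used as their own
# reference. Hypothetical counts; only the 51% relationship comes from the text.
cases_days_1_to_12 = 1000
cases_days_13_to_24 = 490   # 51% fewer than the first window

print(f"Apparent effectiveness: {1 - cases_days_13_to_24 / cases_days_1_to_12:.0%}")

# Without an unvaccinated control group, the same arithmetic would report
# '51% effectiveness' if background infections had simply halved on their own.
```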

Particularly disappointing is that there was no placebo group. So we have no way to determine how much of this slowdown in infections was due to the drop in background infections that would have occurred anyway and how much was due to vaccination (Israel started January with a huge outbreak, which subsided dramatically over the course of the study period). A bit of a theme developing here. Even more disappointing is that the authors chose to use the results from the active intervention in Days 1-12 as the ‘placebo’ for the figures recorded in Days 13-24 (and even had the gall to plot the ‘intervention’ against the ‘placebo’ in Figure 1). They also provided none of the raw data in a table.

We have no idea if the number of cases in Days 1-12 was higher or lower than the background change. Because there was no control group. We have no idea if the number of cases in Days 13-24 was higher or lower than the background change (for the same reason). We have no idea what the relationship is between the figures produced on Days 1-12 compared to Days 13-24, other than they were lower. Tragic is the final paragraph of the paper: “we showed approximately 51% effectiveness of BNT162b2 COVID-19 vaccine against PCR-confirmed SARS-CoV-2 infection 13-24 days after immunization with the first dose using the preceding 1-12 days as a reference… Global effort should accelerate COVID-19 vaccines deployment urgently.” Most tragic is the use of this ‘scientific’ paper to do just that.

Alternative headline: “while background cases dropped across the country, so too did cases in our study group”.

5. March 2021: Lopez Bernal et al – “vaccines are 89% effective”

The researchers wanted to determine the effectiveness of the Pfizer and AstraZeneca vaccines in avoiding hospitalization. They tracked the outcomes (positive tests, COVID-associated hospitalizations and deaths) between December 8th and February 19th. While funding does not automatically result in bias, one of the obvious red flags is that the paper was funded by the UK Government, which clearly had some skin in the game (having already encouraged millions to take experimental jabs).

Sure enough, there are major methodological flaws. One such flaw is the refusal to discuss the 48% spike in cases found in the vaccinated group in the first nine days, beyond a single sentence declaring that there is nothing to see here: “vaccinated individuals had a higher odds of testing positive, suggesting that vaccination was being targeted at those at higher risk of disease”. Yep, that was the only mention of this key finding.

They then go on to explain that vaccine effectiveness was not based on the reduction in infections relative to baseline (i.e. the actual effect of the vaccines). It was instead based on the reduction in infections relative to the first nine days – that’s right, the period when infections were 48% higher than baseline.
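
To see why an inflated reference period flatters the result, here is a sketch with hypothetical rates; only the 48% figure comes from the text above:

```python
# Why an inflated reference period flatters the result. Hypothetical rates;
# only the 48% figure comes from the text above.
baseline_rate = 100          # hypothetical pre-vaccination infection rate
reference_rate = 148         # first nine days post-jab: 48% above baseline
later_rate = 100             # suppose the later rate simply returns to baseline

print(f"Effectiveness vs the inflated reference: {1 - later_rate / reference_rate:.0%}")  # ~32%
print(f"Effectiveness vs the true baseline:      {1 - later_rate / baseline_rate:.0%}")   # 0%
```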

Alternative headline: “treatment reduced infections, compared to the time when it pushed infections up by 48%”

 

6. April 2021: The Israeli Study – “Vaccination is 46% effective for infections, 87% for hospitalizations”

This paper was billed as the follow-up to the January paper coming out of Israel. Unlike the first one, it promised to do what science has always done: compare the outcomes of those undergoing the intervention (vaccination) against matched controls who did not. And indeed the structure of the study looked good: 596,618 people newly vaccinated between December 20, 2020 and February 1, 2021 were tracked against 596,618 controls. Some of the earliest-vaccinated ended up being tracked for 44 days, some of the latest for just a few, but the numbers were large enough to make this a meaningful comparison.

Only it was not. The biggest red flag was the glaring difference between the vaccinated group and those reported as controls, supposedly matched for age, gender, ethnicity and risk factors. Indeed, in the Results section, the authors have the temerity to state: “All variables were well balanced between the study groups”.

The only thing they were not matched for was COVID infections at baseline. This is important, given it is the very outcome they were measuring. After ‘matching’ the cases and controls, it turns out that there were 359 infections in the ‘matched’ unvaccinated compared to 172 in the group undergoing vaccination. In other words, there was something about the unvaccinated group that increased their risk of infection by 2.09x compared to those that were to be vaccinated. This is hidden by the authors, and is only seen if one clicks on the Supplementary Appendix and then scrolls all the way down to Table S7 (where the raw numbers are provided).

In the main paper itself, we see no discussion of this huge difference between the groups. We only see the adjusted graphs, ones that undergo an unexplained adjustment so that both groups appear to begin at zero and then track only accumulated (additional) infections from that day on. A very suspect thing to do with a group that has 2.09x the risk at baseline. In any case, these two ‘adjustments’ mean that the authors’ graphs look like this:

When any normal graphing (count the number of infections per day, and show it) would show a different picture:

Or they could have skipped straight to the endpoints: the unvaccinated group recorded an average of 139.6 infections per day during the study period (a 61% drop compared to baseline) while the vaccinated group recorded an average of 101.3 infections per day (a 41% drop). But they didn’t mention this and, like many papers before and since, neither did they mention the post-first-jab spike. It’s a shame that this was ignored, because there is some worthwhile discussion to be had here: just because there was a bigger drop in cases amongst the unvaccinated doesn’t automatically mean the treatment is causing a problem. We need to recognize that the population put forward as the ‘matched’ controls was clearly not matched and was undoubtedly exposed to higher baseline risks. We don’t know what drove this (socio-economic status? Access to healthcare? Urban vs countryside? Healthcare worker vs other jobs? We can only guess). But it will undoubtedly have played a big role, and it’s vital to note that the background number of infections was falling throughout this time period. The one clear takeaway is that there are bigger factors at play than vaccination, and it would be absurd to present these trial results as ‘proof’ of anything, let alone the effectiveness of the vaccine.
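
For completeness, here is the endpoint arithmetic reproduced from the figures quoted above. This is my own sketch, treating the Table S7 counts as the baseline daily figures that the percentages in the text imply:

```python
# Reproducing the endpoint comparison from the figures quoted above (my own
# arithmetic, treating the Table S7 counts as the implied baseline daily figures).
unvaccinated_baseline, vaccinated_baseline = 359, 172
unvaccinated_daily, vaccinated_daily = 139.6, 101.3

print(f"Baseline imbalance: {unvaccinated_baseline / vaccinated_baseline:.2f}x")               # ~2.09x
print(f"Unvaccinated drop vs baseline: {1 - unvaccinated_daily / unvaccinated_baseline:.0%}")  # ~61%
print(f"Vaccinated drop vs baseline:   {1 - vaccinated_daily / vaccinated_baseline:.0%}")      # ~41%
```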

Alternative headline: “In the 44 days following treatment, unvaccinated see a larger drop in infections than the vaccinated.”

 

7. April 2021: The Manchester Uni Study – “one jab reduces infections by 66%, two jabs by 80%”

Although often referred to as the Manchester Uni paper, this article was authored by a team of contributors from a number of UK universities. It set out to study the effect of vaccination, tracking a total of 373,402 individuals who provided multiple swab tests between 1st December 2020 and 3rd April 2021 (1.6m results in total). It divided them into six time periods (more than 21 days before first vaccination, less than 21 days before first vaccination, 0-7 days following first vaccination, 8-20 days following first vaccination, 21 days or more following first vaccination, and post second vaccination), using the first time period as the baseline. They then added a further group: those who had previously tested positive and were not vaccinated (whose results were averaged across the entire study period).

The first red flag was that they did not track any unvaccinated individuals. If your aim is to determine the effectiveness of taking something versus not taking something, then you have a clear interest in measuring both groups. Otherwise, how can you hope to make a comparison?

The second big concern is that it repeats the glaring errors of papers that have gone before it, such as the January paper from Israel (discussed above). Specifically, it took measurements in the middle of an outbreak, measured again as the outbreak waned, and then tried to draw a conclusion from the difference. This is why the scientific method hinges on placebo control: without it, you run the risk of generating useless data. Just like this study, where we saw a 72% drop in infections before a single vaccine was administered.

This massive drop can be seen very clearly in the supplementary data (Table 7), yet doesn’t get a mention in the main paper. This table lists the adjusted rate of positive tests – the figures that form their headline conclusions – and it is particularly noticeable that these gains are then partly reversed: only a 55% drop versus baseline in the 8-20 days following first vaccination, improving slightly to a 66% drop 21 or more days after the first vaccination, and reaching an 80% drop after the second injection.

Does this tell us that the intervention makes things worse and then achieves a minor improvement? Difficult to say, because of the substantial drop in background infections in the first few weeks of this study period. All it shows us is that you get worthless data when you do not take into account confounding factors (such as a massive drop in infection rates across the country during the study period) and try to make comparisons without use of a control group. Which we knew before.

Alternative headline: “doing nothing reduced infections by 72%.”

8. April 2021: Thompson et al – “Vaccines are 90% effective”

There were 3,950 participants in total; 2,479 received their second dose during the study period (December 2020 to March 2021) and 477 received just one dose. As you might expect, they then tracked the total number of positive tests and compared them to the total number of days spent unvaccinated, partially vaccinated and fully vaccinated.

The researchers reported a 90% drop in cases for those vaccinated. However, the study was marred by an ‘oversight’ that saw many question the objectivity of the authors: the exclusion of all positive tests that occurred within 13 days of a vaccine. It’s a legitimate consideration that medical treatments may take time to reach effectiveness but, at the time of publishing, the phenomenon of the ‘post-jab spike’ in infections was already well known (it was a discussion point from the original Pfizer study and also featured in a Danish study published in early March). The study recorded 44 positive results in those partially or fully vaccinated, but the authors simply removed the 33 results that occurred in this window. This naturally has a big impact on the results, upping the rate per thousand days to 0.36 rather than the 0.04 figure they chose to focus on. Quite a big difference.

Skeptics may not be surprised to see one of the authors declare a financial relationship with Pfizer. This does not mean we should automatically reject the conclusions, but when there is a clear incentive for bias and obvious signs of bias in the methodology, it makes sense to question whether the study is reliable. That being said, these results do still indicate a positive effect in reducing COVID infection from the pre-Delta strains.

Alternative headline – “vaccine looks decent in pre-Delta strains, but looks great when you remove its negative effects”

9. July 2021: Thompson et al – “Vaccines are 91% effective”

This is essentially the same study, with the same flaws, continued for an additional month and turned into ‘yet another paper that shows the benefits of vaccination’. Despite tracking vaccinated individuals for an average of just 69 days, it was heralded as further proof of the ‘long-term benefits’ of vaccination. Hasn’t aged too well…

One thing they added in this paper, versus the April version, was measurements from throat swabs. They reported reduced viral loads in the vaccinated populations (a conclusion that formed the basis of CDC policy, a policy that remained unchanged even when the CDC acknowledged that this simply was not the case). Many have questioned the value of such a finding, made in a CDC-funded paper that just so happened to provide the conclusions necessary to justify a controversial CDC policy. However, while it is sensible to consider the convenience of these findings and the fact that they have not been validated by further research, the discrepancy between the findings here and all papers since may actually be explained by different responses of the vaccine against older strains. We don’t actually know. But we do know that there is no scientific rationale for the CDC to run their vaccine policy on the basis of findings in this paper, given that those findings have since been invalidated.

Alternative headline: “reporting the same investigation for a second time in a second journal gave us similar conclusions”

10. August 2021: Fowlkes et al – “Vaccine effectiveness was 91% before Delta hit, and 66% since”

In this paper, researchers tracked 4,217 medical staff from eight hospitals across the US who were not thought to have been infected prior to the study. They then followed them for eight months, measuring not only the number of infections that occurred, but when each occurred and the vaccination status of the worker at that point in time. Recognizing that most workers were vaccinated during the study, they counted up the total number of days that workers spent vaccinated and unvaccinated, then measured how many positive tests occurred per day spent vaccinated or unvaccinated. Sensible so far.

However, like almost all studies conducted in the Land of the Free, the study falls down on what it actually measures. The methodology acknowledges that it followed their prior procedures: “respiratory specimens are analyzed utilizing the CDC-designated reference laboratory for real-time reverse transcription–polymerase chain reaction (rRT-PCR) assay testing”. And herein lies the problem: the CDC changed the way that they report ‘breakthrough’ cases from May 1st, which means that ‘breakthrough’ cases are under-reported by at least 13x. Therefore, reporting on the cases recorded by the CDC is materially different to reporting on real-world cases; this paper is actually studying the administrative practices of the CDC more than it is studying medical outcomes.

Interestingly enough, the CDC has only done one study on medical outcomes in humans, the so-called Massachusetts study (covered in Part II), which found vaccinated individuals were over-represented in cases and especially in hospitalizations (4 to 1). It was released three weeks before this one but, for reasons we can only guess, never made it into any of their press conferences. Maybe it was an admin error.

On the subject of admin errors, this paper is yet another that excluded figures for the first 14 days following vaccination (the period where we know infections are most likely to occur).

Alternative headline: “CDC files massively under-report cases and hospitalizations in the vaccinated compared to real life data, plus there may be some reduced effectiveness against Delta variant”.

11. August 2021: The Kentucky Study – “Vaccine is 2.34x more effective than natural infection”

The researchers scanned the CDC database for individuals living in Kentucky who a) were registered as having a positive COVID test in 2020 and then b) tested positive again in May/June 2021. They then selected 496 controls that they believed were matched, with the requirement that they too were registered as having a positive test during 2020 but had not tested positive again through June 2021.
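
For readers unfamiliar with the design, this is roughly what a case-control odds ratio calculation looks like. The counts below are entirely hypothetical and are not the Kentucky paper’s figures:

```python
# Illustration of a case-control odds ratio calculation. The counts here are
# entirely hypothetical and are NOT the Kentucky paper's figures.
cases_vaccinated, cases_unvaccinated = 40, 206           # hypothetical reinfected group
controls_vaccinated, controls_unvaccinated = 150, 346    # hypothetical matched controls

odds_vaccination_cases = cases_vaccinated / cases_unvaccinated
odds_vaccination_controls = controls_vaccinated / controls_unvaccinated
print(f"Odds ratio: {odds_vaccination_controls / odds_vaccination_cases:.2f}")

# The critique below is about the inputs: the 'positives' feeding this kind of
# calculation are CDC database entries, and testing frequency in the two
# groups was never measured.
```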

The authors noted that there was no requirement for any actual physical signs of infection for participants to be considered reinfected (only a PCR result), and that these tests made no attempt to identify viral lineage; that is to say, no attempt to check whether the positive tests related to a reinfection or simply prolonged viral shedding. However, these issues are small fry compared to the monster flaws here.

One is in line with the previous paper, in that the study was not measuring medical outcomes in humans but administrative outcomes; that is to say, how many times the CDC chose to record positive tests in their database (as mentioned above, this understates the number of positive cases in the vaccinated, aka ‘breakthrough infections’, by at least 13-fold). Further distortions arise from the fact that, if someone becomes infected within 14 days of having a vaccine, this is classified as occurring in the unvaccinated. The other is that the researchers did not count how many tests were performed by the vaccinated group or the post-infection group (which is particularly bizarre during a study period in which the CDC were actively discouraging vaccinated individuals from routine testing).

Alternative headline: “the CDC recorded less positives for the group who did less tests and who had the majority of their positive tests ignored.”

12. September 2021: Veterans Study – “86% effectiveness at reducing hospitalization”

If you wanted to check the effectiveness of a vaccine against hospitalization, what would you do? Would you take 1,000 individuals who’d been vaccinated and 1,000 unvaccinated individuals matched for gender, age, smoking status and exposure risks, then track how many of each ended up in hospital over the next few months? Me too.

The researchers on this paper had other ideas. They decided that the best way to work out vaccine effectiveness against COVID would be to go through hospital records over six months and count the total number of veterans hospitalized for chest complaints and then compare how many people had COVID and how many people had flu/pneumonia. Seriously. They just counted how many veterans ended up in hospital (across five selected sites) for chest-related complaints that had been declared ‘COVID positive’ or ‘COVID negative’ on the basis of the PCR test. They then based their conclusion on the fact that those declared to have flu/pneumonia had a higher COVID vaccination rate compared to those declared to have COVID (48% vs 13.9%). The idea being that the COVID group had a lower percentage of vaccinated individuals, so therefore the vaccines must be working. Seriously.
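
This is, in effect, a crude ‘test-negative’ comparison. Here is what that calculation looks like using only the two percentages quoted above; this is my own sketch, and the paper’s own adjusted figure is 86%:

```python
# A crude 'test-negative' style calculation using only the two percentages
# quoted above (48% vaccinated among flu/pneumonia admissions, 13.9% among
# COVID-positive admissions). My own sketch; the paper's adjusted figure is 86%.
vaccinated_share_covid = 0.139
vaccinated_share_controls = 0.48

odds_covid = vaccinated_share_covid / (1 - vaccinated_share_covid)
odds_controls = vaccinated_share_controls / (1 - vaccinated_share_controls)
print(f"Crude vaccine effectiveness: {1 - odds_covid / odds_controls:.0%}")   # ~83%
```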

They did not discuss this bizarre choice of methodology, nor the differing testing procedures between vaccinated and unvaccinated in US hospitals from May 1 (which was just over halfway through the study period).

Alternative headline: “Some hospitalized veterans with chest infections have COVID, some have flu, some researchers have no idea what they’re doing”

13. September 2021 – “Vaccinated shed for a shorter time period”

This study, reported in September 2021, measured the viral load over time in individuals who had undergone vaccination and had then tested positive for COVID-19, and reported that vaccination reduced the length of time that such individuals shed (compared to the unvaccinated).

Are there problems with this conclusion? Yes. The first relates to the sample size: this study included only six vaccinated participants. It also did not include any unvaccinated participants (a pretty big flaw for a paper that purports to compare the responses of the vaccinated against that group). How did they claim to make such a comparison? By estimation: they took the average results from a previous investigation of an unvaccinated group (for which samples were taken in late 2020 and early 2021).

This certainly fits the bill for a ‘pilot study’ and certainly calls for further investigation. Perhaps such research should actually include the unvaccinated in the comparison? A radical approach, no doubt…

Alternative headline: “we’ve measured viral shedding in six vaccinated people, it would be a good idea to run a comparison against the unvaccinated”

14. September 2021 – Scottish Study – “data supports third dose for the vulnerable”

(A quick warning on this write-up: whereas all the previous papers discussed have been rather simple in their flaws, this one gets into the use of medical statistics. Tedious, yes, but still a key part of the wider discussion.)

Warnings aside, this paper assessed 5,168 people with a ‘severe’ COVID diagnosis made in Scotland between 1 December 2020 and 19 August 2021. It then compared them to 45,767 matched controls to see if there were any differences between those who were getting severe COVID and those who were not. This is clearly a very useful thing to do, as it allows researchers (and readers) to quickly spot risk factors that call for further investigation. And, as you might imagine, there were several differences, which they laid out in Table 1:

  • Those diagnosed with COVID were more likely to have pre-existing medical conditions that put them at risk (73% versus 44%, in other words the at-risk group were 1.66x over-represented)
  • They were also more likely to have been in hospital lately (28% versus 2%)
  • They were also more likely to be in a care home (28% versus 5%)

These were the differences that stood out. 84% of those with a COVID diagnosis were unvaccinated, versus 81% of those who were not diagnosed in this time period (such numbers reflect the fact that the vaccination program only gathered pace around May, hence both the with-COVID and without-COVID groups were largely unvaccinated). It’s worth noting that, like many of their peers, the authors classified the recently-vaccinated as unvaccinated, which may account for a ~10% swing.

Nonetheless, there are a lot of other interesting tabulations at the bottom of the study (in the supplementary data). For total hospitalizations there was a similar pattern: of those hospitalized, 60% had risk factors (versus 35% of controls), they were more likely to have been in hospital in the lead-up (26% versus 2%), and more likely to be in a care home (11% versus 3%). Essentially the same patterns, just not as pronounced. The unvaccinated made up 82% of the hospitalized versus 77% of their matched non-hospitalized peers (a relative difference of around 6%).

All interesting stuff, and useful discussion points on something we are all discussing. But how did the researchers determine the reported effects of the vaccines in these sub-groups? If you thought it was by comparing how many people in each group were vaccinated/unvaccinated and then tracking the percentage that got severe COVID, you’d be wrong. They instead assessed only the vaccinated (listed in Table 2) and then assessed differences in this group relative to the overall population. To do this, they fitted a Poisson regression model, one that ‘integrated various risk categories (care home residence, number of adults in household, number of drug classes and recent hospital stay) as covariates’ and was ‘fitted to the cohort formatted with one observation per 28-day person-time interval’. In case you were worried about how the baseline hazard rate was modelled, they kindly confirm that ‘the baseline hazard rate was modelled as a natural spline function of calendar time with 6 degrees of freedom’. This model revealed that, in the group not subject to risk factors, the vaccine was 93% effective at avoiding severe COVID! It also found a much smaller effect (only 66%) in those with major risk factors.
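
For readers unfamiliar with the method, this is roughly what fitting such a model looks like. The data here are entirely synthetic and the covariates simplified, so it only illustrates the machinery being described, not the paper’s actual analysis (which also included the spline baseline and further covariates):

```python
# Rough illustration of a Poisson regression on person-time intervals, with
# synthetic data and simplified covariates. NOT the paper's model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "vaccinated": rng.integers(0, 2, n),
    "care_home": rng.integers(0, 2, n),
    "recent_hospital_stay": rng.integers(0, 2, n),
    "person_days": rng.integers(14, 29, n),   # length of each observation interval
})
# Synthetic daily event rate: raised by the risk factors, lowered by vaccination.
daily_rate = 0.002 * np.exp(1.0 * df["care_home"]
                            + 0.8 * df["recent_hospital_stay"]
                            - 1.5 * df["vaccinated"])
df["events"] = rng.poisson(daily_rate * df["person_days"])

X = sm.add_constant(df[["vaccinated", "care_home", "recent_hospital_stay"]])
result = sm.GLM(df["events"], X,
                family=sm.families.Poisson(),
                offset=np.log(df["person_days"])).fit()

rate_ratio = np.exp(result.params["vaccinated"])
print(f"Estimated rate ratio for vaccination: {rate_ratio:.2f}")
print(f"Implied 'effectiveness': {1 - rate_ratio:.0%}")
```

Note that the output here depends entirely on the synthetic inputs; the point is simply that the ‘effectiveness’ figure emerges from a fitted model rather than from directly counting outcomes in each sub-group, which is exactly the complaint raised below.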

The first logical question is this: if there was only a 6% difference in relative risk between the two populations overall (i.e. little difference), then there would need to be a sub-group showing higher risk following vaccination to balance out the group showing such a strong drop. This is simple maths, but the authors offer no comment on whether any group showed increased risk following vaccination.

It’s also a question that would allow us to determine whether there are sub-groups that should be targeted by vaccination (and those that should not). Unfortunately, a breakdown of outcomes across the various sub-groups is one bit of data not reported. Consequently, big questions remain as to the use of a Poisson model, a tool that depends on covariates being unrelated, when three of the four covariates were clearly related (and thus likely to magnify the apparent effect of an intervention).

So here’s a disclaimer: I’m not a statistician. If you are, please comment. That being said, it doesn’t take a statistician to recognize that those in care homes are likely to take more drug classes and to have recently visited hospital, and vice versa. Neither does it take a statistician to ask why, given that the researchers already have the exact numbers of vaccinated/unvaccinated people in each sub-group who contracted severe COVID, they need to run such complex models to estimate it. Further criticisms can be levelled at the way this paper, like those before it, treats infections in the recently-vaccinated as happening in the unvaccinated population, which can severely distort comparisons.

In summary, a paper with the potential to inform our discussions, especially regarding options for treating or shielding the more vulnerable, but one that falls well short through unnecessary use of esoteric statistical wizardry that raises more questions than answers.

Alternative headline: “it’s probably worth talking about different effects between those with comorbidities, but this will have to be based on our models and risk ratios rather than the actual numbers”

15. October 2021 – “we were right to over-ride human rights, we just should have done it earlier”

This report sees the government’s inner circle and approved experts brought in to determine if the government’s inner circle and approved experts were right to bring in the most draconian policy ever seen in peacetime Britain. What could possibly go wrong?

Let’s start with the two biggest assumptions of the report:

  • That lockdown had a positive effect
  • That infections would have continued to rise exponentially

The idea that lockdown would have a positive effect was ridiculed at the time. It has been shown repeatedly since to have had no merit, a finding replicated in over 30 studies, and even the WHO went on to plead for an end to such policies. As was noted by many (both at the time and since), a feature of all pandemics is that the infection rate “always starts falling long before the herd immunity threshold is reached with or without a lockdown”. Real-world evidence of this pattern was seen in countries that decided against lockdown (such as Japan, South Korea, Ukraine, Taiwan and Sweden, plotted below), and it was even acknowledged by Chris Whitty himself in July 2020, when he admitted that the R number had dropped ‘below one well before, or to some extent before, March 23rd’.

The second assumption made to generate the numbers in this report – that infections would have increased exponentially forever – warrants little discussion, because it is simply not possible: exponential growth cannot continue indefinitely in a finite population.
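
To illustrate why ‘exponential forever’ is not a plausible counterfactual, here is a minimal sketch comparing a pure exponential extrapolation with a basic SIR epidemic model. The parameters are arbitrary and purely illustrative, not calibrated to UK data:

```python
# Minimal comparison of a pure exponential extrapolation with a basic SIR
# model. Parameters are arbitrary and purely illustrative (not calibrated to
# UK data); the point is only that epidemic growth self-limits.
N = 1_000_000             # population size
beta, gamma = 0.25, 0.1   # transmission and recovery rates (R0 = 2.5)
S, I, R = N - 100.0, 100.0, 0.0
exponential_I = 100.0

for day in range(1, 181):
    new_infections = beta * S * I / N     # SIR: slows as susceptibles are depleted
    recoveries = gamma * I
    S, I, R = S - new_infections, I + new_infections - recoveries, R + recoveries
    exponential_I *= 1 + (beta - gamma)   # naive exponential: never slows
    if day % 60 == 0:
        print(f"Day {day}: SIR infectious ≈ {I:,.0f}, exponential ≈ {exponential_I:,.0f}")
```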

You’d think that this might be the defining characteristic of this paper, yet it’s trumped by the stunning self-denial shown elsewhere in the report, where the case against lockdown is simply not considered. Instead, the paper reframes reality, claiming that there was a consensus on the need for lockdown, despite such policies resulting in over 60,000 doctors and scientists publicly opposing them (the biggest medical rebellion in human history).

Alternative headline: “our only mistake was not being draconian enough… providing that we ignore all critics and all real-world evidence and base conclusions purely on our prior assumption.”

Summary

This is by no means a complete breakdown of all the available science on SARS-CoV-2/COVID-19, and neither is it meant to be (at the time of publication, there were 163,100 papers published on this topic). It instead focuses on the 15 papers that have had the biggest impact on public policy.

Neither is it designed to deliver a black-and-white summary of what is clearly a complex issue. However, I hope it serves as a starting point for an overdue discussion of ‘the science’ that adheres to scientific principles. That is to say, one that focuses less on what the authors write in their conclusion and more on what the data tells us.

Such a discussion would naturally be incomplete without considering the points raised by the 21 papers examined in Part III.

