Toxicology is the science of determining the "health/harm" limits of exposure to substances. They say "the dose makes the poison," meaning that anything, in the wrong dose, could harm you. Even something as innocuous as distilled water could kill you (via hyponatremia and subsequent intracranial edema) if you drank something like 3 gallons of it in a short period.
Because people differ, toxicology does not take research numbers at face value; instead, it extrapolates beyond them:
Applying toxicology to the Phase 3 clinical trial data for COVID shots indicates that they should never have been given out. To see why, we can take the observed numbers of harmful instances and then go beyond them to estimate the population risk from COVID shots.
Fraiman et al. examined the published Phase 3 trial data for COVID shots for a subset of adverse events (AEs) classified as serious (serious AEs involve hospitalization at a minimum) and found a combined mRNA rate of one associated hospitalization (one serious AE) per 1,600 doses given.
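The rate quoted above comes straight from the figure in the reference below (12.5 serious AEs per 20,000 doses); a quick sketch of that conversion:

```python
# Convert the reported figure (12.5 serious AEs per 20,000 doses,
# per the Fraiman et al. reference below) into a per-dose rate.
serious_aes = 12.5
doses = 20_000

doses_per_serious_ae = doses / serious_aes  # 20,000 / 12.5 = 1,600
print(f"1 serious AE per {doses_per_serious_ae:.0f} doses")
```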
But proper toxicology doesn’t take that rate as the last word on the matter. When toxicology is performed properly, adjustments are made using uncertainty factors (UF):
The five types of UF at the bottom of the image above each typically run from about 1 up to about 10. When an uncertainty factor is 10, you multiply your estimate of harm by that much to obtain a reference dose predicted to be safe for virtually all groups. For COVID shots, the "interspecies" UF isn't used, because the data already refer to humans.
A LOAEL is a lowest-observed-adverse-effect level: the smallest dose ever observed to cause harm. It is not the largest dose never observed to cause harm (the NOAEL), so when relying on a LOAEL to derive a safe reference dose, you apply an uncertainty factor to the observed data.
Subchronic studies are short-term ones, so to estimate long-term risk properly, you apply another uncertainty factor. Intraspecies uncertainty covers the variability in response among people, requiring its own uncertainty factor. Database deficiency applies when, for instance, you lack data on reproduction and development.
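The standard way these factors are combined in risk assessment is to divide a point of departure (a NOAEL or LOAEL) by the product of all applicable UFs. A minimal sketch, using a hypothetical function name and made-up illustrative numbers:

```python
def reference_dose(point_of_departure, uncertainty_factors):
    """Divide a point of departure (NOAEL or LOAEL) by the product
    of all applicable uncertainty factors to get a reference dose.
    Hypothetical helper for illustration; numbers are not from any
    real assessment."""
    product = 1
    for uf in uncertainty_factors:
        product *= uf
    return point_of_departure / product

# Illustrative only: a LOAEL of 50 (in some dose unit) with default
# UFs of 10 (LOAEL-to-NOAEL) and 10 (intraspecies) gives 0.5.
print(reference_dose(50, [10, 10]))
```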
Down the middle-left of the image below are common default UF values:
Let's assume a conservative value of just 2 for each of the four UFs that apply to COVID shots.
UF(L) = 2
UF(S) = 2
UF(H) = 2
UF(D) = 2
Cumulative Uncertainty Factor = 2 × 2 × 2 × 2 = 16
Applied to the Fraiman et al. data, instead of 1 hospitalization per 1,600 doses given, the extrapolation yields 16 hospitalizations per 1,600 doses, or 1 hospitalization per 100 doses (roughly 1% of those who take the shot get hospitalized with a serious adverse event because of taking it).
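The arithmetic of that extrapolation can be sketched directly:

```python
# Cumulative uncertainty factor and adjusted rate from the text.
# Keys: L = LOAEL-to-NOAEL, S = subchronic-to-chronic,
#       H = intraspecies (human variability), D = database deficiency.
ufs = {"L": 2, "S": 2, "H": 2, "D": 2}

cumulative_uf = 1
for uf in ufs.values():
    cumulative_uf *= uf            # 2 * 2 * 2 * 2 = 16

observed_rate = 1 / 1_600          # 1 serious AE per 1,600 doses
adjusted_rate = observed_rate * cumulative_uf  # 16/1,600 = 1/100

print(cumulative_uf, adjusted_rate)
```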
The "safe bet," then, is that somewhere around 1% of those receiving COVID shots risk death or disability. This means that COVID shots, even if effective, might have been useful only if COVID itself carried greater than a 1% risk of death.
But because the risk from COVID infection was never more than twice the risk associated with flu, a risk-benefit analysis would preclude any general use of COVID shots, since they are so much more dangerous than the original disease itself.
If public health had been the top incentive motivating public officials at the time, these COVID shots would never have gone out. The indication is that some other primary incentive was behind the choices they made.
Reference
[12.5 serious AEs per 20,000 doses] — Fraiman J, Erviti J, Jones M, Greenland S, Whelan P, Kaplan RM, Doshi P. Serious adverse events of special interest following mRNA COVID-19 vaccination in randomized trials in adults. Vaccine. 2022 Sep 22;40(40):5798-5805. doi: 10.1016/j.vaccine.2022.08.036. Epub 2022 Aug 31. PMID: 36055877; PMCID: PMC9428332. https://pubmed.ncbi.nlm.nih.gov/36055877/