In April 2017, anti-vaccine groups seemed to have finally gotten what amounted to the Holy Grail for their cause: a peer-reviewed study purporting to show a link between vaccines and autism in a large population of children.
Vaccine skeptic groups, who reject the wide body of scientific literature refuting that link between vaccines and autism, have long sought such a study, but they’ve been hampered by practical concerns, most notably the ethical implications of withholding vaccines from a large group of children.
Released to heavy promotional fanfare on anti-vaccine websites and social media, a 24 April 2017 study published by the Journal of Translational Science claimed to be that Grail. The study (titled “Pilot Comparative Study on the Health of Vaccinated and Unvaccinated 6- to 12-Year Old U.S. Children”) neatly solved the problem of withholding vaccines by surveying parents who had already chosen not to vaccinate their children.
Using an online survey of 415 mothers of homeschooled children, the study concluded that vaccines can increase the risk of neurological developmental disorders, particularly in cases of preterm birth.
The anti-vaccine website Age of Autism, which also helped raise money for the study, reported its findings in glowing terms:
As parents have long expected, the rate of autism is significantly higher in the vaccinated group, a finding that could shake vaccine safety claims just as the first president who has ever stated a belief in a link between vaccines and autism has taken office.
The only problem? The paper is an identical version of a paper briefly published in Frontiers in Public Health in 2016 before being disowned by the publisher. This Translational Science version was also pulled from the journal's website, with some reports that it, too, had been retracted. As of 18 May 2017, however, the study had reappeared on that journal's website with no public comment on why it was removed or restored.
As we will describe below, these de facto retractions and the high level of scrutiny stem not from a conspiracy to silence work critical of the medical establishment, but from the myriad ethical, methodological, and quantitative problems inherent in the study and in the research group behind it.
In short, the study’s design was flawed from the start.
Problems With The Study’s Statistical Analyses
The authors then dressed up their flawed, potentially biased data set in the cheap Halloween costume of a statistically responsible study. No amount of statistics, however, will get around the fact that there were not enough people in the study to address the questions they claim to have investigated.
Only 47 children in the 666-child study had what the authors defined as a neurological developmental disorder, or NDD, an already broadly defined category that includes multiple conditions. Despite this, viral blog posts reporting on the study repeatedly state that autism risk, specifically, was higher in the vaccinated population. What those headlines fail to convey is that such a claim rests entirely on the nine children in the study who were actually diagnosed with autism.
The grossly insufficient sample size is most apparent in the lack of precision in the final results. The authors use something called an "odds ratio" to present their analysis: a measure that compares the relative odds of two populations with differing medical histories (i.e., vaccinated versus unvaccinated) developing a certain medical condition (i.e., NDDs). When scientists use odds ratios, they give a measure of how precise the odds are, a range of uncertainty that can be thought of, essentially, as a margin of error.
In the Mawson study, the authors claim (for example) that preterm birth combined with vaccination was associated with a 6.6-fold increase in the odds of a neurological condition or learning disability. This remarkably large odds ratio comes with a laughable lack of precision: a range from 2.8 to 15.5. So wide a range of possibility is a big red flag, and it is representative of many of the associations documented in the study.
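To see why small numbers of cases produce such wide ranges, consider how an odds ratio and its confidence interval are typically computed from a 2x2 table. The counts below are entirely hypothetical, chosen only to mimic the scale of this study (a handful of cases among a few hundred children); they are not taken from the paper.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a standard (Wald) confidence interval.

    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    # The standard error of log(OR) is sqrt(1/a + 1/b + 1/c + 1/d),
    # so tiny cell counts (like a or c here) directly inflate the
    # uncertainty around the estimate.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical example: 10 cases among 210 exposed children,
# 3 cases among 403 unexposed children.
or_, lower, upper = odds_ratio_ci(10, 200, 3, 400)
print(f"OR = {or_:.1f}, range {lower:.1f} to {upper:.1f}")
```

Even though the point estimate here lands near 6.7, the interval stretches from below 2 to above 24, spanning more than a tenfold range of possible effect sizes. That is the signature of too few cases, not of a precisely measured risk.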
On top of that imprecision, odds ratios are generally pushed through statistical models meant to untangle the competing influences of other factors in the data, isolating the specific effect of one risk factor on an outcome. This is a crucial step for Mawson et al., because a significant number of children included in the NDD pool were also born prematurely, a condition multiple studies have already robustly linked to NDDs.
The competing influence of preterm birth on NDDs is arguably the largest methodological hurdle for the researchers to get around. The authors needed to make crystal clear that the association between NDDs and vaccination holds without the influence of preterm birth, or else the study is meaningless. They claim to have done so, but provide next to no documentation on the specifics of that process.
Without explaining exactly what they did, they simply write that they used two statistical models to adjust for preterm birth and that the "second adjustment" showed that (a) there was no relationship between preterm birth and NDDs and (b) there remained a relationship between vaccination and NDDs.
That is quite a statement. The researchers' claim that there is no association between preterm birth and NDDs flies in the face of long-established science linking the two. Any research that contradicts long-understood and scientifically documented relationships should be read with a critical eye toward the methods used. But that is almost impossible here, as the only documentation the authors offer of their methods is to say they carried out a variety of regressions using a statistical software program called SAS.
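By contrast, a fully documented adjustment for a single confounder can be stated in a few lines. One standard, transparent approach is to stratify the data by the confounder (here, preterm versus full-term birth) and pool the within-stratum odds ratios using the Mantel-Haenszel estimator, which is one conventional technique, not necessarily the one the authors used. All counts below are hypothetical.

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across strata of a confounder (Mantel-Haenszel).

    Each stratum is a 2x2 table (a, b, c, d):
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return numerator / denominator

# Hypothetical strata: the same children split by preterm vs. full-term
# birth. Pooling within strata asks whether the vaccination-NDD
# association survives once preterm birth is held constant.
strata = [
    (4, 30, 1, 25),    # preterm stratum
    (6, 300, 2, 290),  # full-term stratum
]
print(f"Adjusted OR = {mantel_haenszel_or(strata):.2f}")
```

The point is not the particular number this yields, but that a reader can check every step. Reporting only "we ran regressions in SAS" offers no equivalent way to verify the adjustment.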
What makes this worse is that the authors did not explicitly offer a hypothesis or any statistical criteria prior to beginning the study. That makes their data vulnerable to confirmation bias, as it allows them to slice and dice the data in as many ways as they want and plug those data into a program to see if a desired result pops out. The study's small sample size undoubtedly compounds this possibility, as it increases the likelihood of chance associations that may be flashy but are ultimately not reproducible.
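The danger of unrestricted slicing and dicing can be quantified with a simple simulation. If each comparison has a 5% chance of coming out "significant" purely by chance, a researcher free to test many outcome and subgroup combinations will stumble onto at least one spurious hit most of the time. The sketch below assumes 20 independent null comparisons, a number chosen only for illustration.

```python
import random

random.seed(0)

def chance_of_false_positive(n_tests, alpha=0.05, trials=100_000):
    """Fraction of simulated studies in which at least one of n_tests
    truly-null comparisons comes out 'significant' at level alpha."""
    hits = 0
    for _ in range(trials):
        # Each null test rejects with probability alpha by chance alone.
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

# With 20 exploratory comparisons and no pre-registered hypothesis,
# a chance "finding" appears in roughly 1 - 0.95**20, or about 64%,
# of studies where no real effect exists at all.
print(chance_of_false_positive(20))
```

This is exactly why pre-registered hypotheses and corrections for multiple comparisons matter: without them, a flashy association in a small exploratory dataset is about as likely to be noise as not.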