A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic – and those, including policy-makers, who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, along with a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to find out whether use of e-cigs is correlated with success in quitting, which might well mean that vaping can help you quit smoking. To achieve this they performed a meta-analysis of 20 previously published papers. That is, they didn’t gather any new data on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted online by Glantz himself as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective as an aid to quitting smoking, but actually counterproductive.
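For readers unused to such figures, “28% less likely” is a relative measure derived from a pooled odds ratio. A minimal sketch of the arithmetic – using invented counts, not anything from the paper itself:

```python
# Illustrative only: how an odds ratio maps to a "% less likely" headline.
# The counts below are hypothetical, not taken from the Kalkhoran/Glantz paper.

def odds_ratio(quit_a, total_a, quit_b, total_b):
    """Odds ratio of quitting in group A (vapers) vs group B (non-vapers)."""
    odds_a = quit_a / (total_a - quit_a)
    odds_b = quit_b / (total_b - quit_b)
    return odds_a / odds_b

# Hypothetical counts: 74 of 1,000 vapers quit vs 100 of 1,000 non-vapers.
or_ = odds_ratio(74, 1000, 100, 1000)
print(f"odds ratio: {or_:.2f}")                            # below 1.0: vapers quit less often
print(f"headline: {(1 - or_) * 100:.0f}% less likely to quit")
```

Note that an odds ratio below 1.0 says nothing by itself about why the odds differ – which is exactly the question the critics quoted below raise.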
The result has, predictably, been uproar among the supporters of e-cigarettes in the scientific and public health community, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system at this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% figure, and look at what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, the results of which should be much less susceptible to any distortions that might have crept into an individual investigation?
(This might happen, for instance, by inadvertently selecting participants with a greater or lesser propensity to stop smoking because of some factor not considered by the researchers – a case of “selection bias”.)
Obviously, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general concept. And even from that simplistic outline, it’s immediately apparent where problems can arise.
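To make that outline slightly more concrete: a standard fixed-effect approach pools each study’s log odds ratio, weighting by inverse variance so that larger, more precise studies count for more. A toy sketch with invented study figures – not the actual 20 papers in the analysis:

```python
import math

# Fixed-effect, inverse-variance pooling of log odds ratios.
# Study data are invented for illustration; they are not from the paper.
# Each tuple: (log odds ratio, standard error).
studies = [
    (math.log(0.65), 0.20),
    (math.log(0.80), 0.15),
    (math.log(0.70), 0.25),
]

weights = [1 / se**2 for _, se in studies]   # precise studies weigh more
pooled_log_or = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled OR: {math.exp(pooled_log_or):.2f}")
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

The weighting is the whole point: it only produces a trustworthy summary if the studies being weighted are measuring comparable things in comparable ways.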
If its results are to be meaningful, a meta-analysis has to somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and attempts to shoehorn all the results into a model that some of them don’t fit, it introduces its own distortions.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unwelcoming view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s call for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking than those who do not. This meta-analysis simply lumps together the errors of inference from these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
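Statisticians do have tools for spotting the apples-and-oranges problem. Cochran’s Q tests whether study results differ by more than chance would allow, and the derived I² statistic estimates what share of the variation reflects genuine between-study differences rather than sampling error. A sketch, again with invented numbers:

```python
# Cochran's Q and I^2 for a set of study estimates (all numbers invented).
# A high I^2 signals heterogeneity: the studies may be too different to pool.
estimates = [-0.43, -0.22, 0.10, -0.60]   # hypothetical log odds ratios
std_errs  = [0.20, 0.15, 0.18, 0.25]

weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
q = sum(w * (e - pooled)**2 for w, e in zip(weights, estimates))
df = len(estimates) - 1
i2 = max(0.0, (q - df) / q) * 100         # percent of variation beyond chance

print(f"Q = {q:.1f} on {df} df, I^2 = {i2:.0f}%")
```

An I² above roughly 50% is conventionally read as substantial heterogeneity – the quantitative version of the Truth Initiative’s complaint.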
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also affect meta-analyses which are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, focusing instead on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the analysis by its nature excluded those who had started vaping and quickly abandoned smoking; if such people exist in large numbers, counting them might have made e-cigarettes seem a far more successful route to smoking cessation.
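Phillips’s objection is a selection effect, and a toy simulation shows how it can bias a measured quit rate downward. Every probability below is invented purely for illustration:

```python
import random

# Toy simulation of the selection effect Phillips describes: vapers who quit
# quickly are never enrolled, because enrolment requires being a current
# smoker who already vapes. All probabilities are invented.
random.seed(1)

population = []
for _ in range(100_000):
    quit_fast = random.random() < 0.15   # quit soon after starting to vape
    quit_later = random.random() < 0.10  # quit during the study window
    population.append((quit_fast, quit_later))

# True success rate among vapers: anyone who quit, early or late.
true_rate = sum(qf or ql for qf, ql in population) / len(population)

# Enrolled sample: still smoking at study start, so fast quitters are gone.
enrolled = [(qf, ql) for qf, ql in population if not qf]
observed_rate = sum(ql for _, ql in enrolled) / len(enrolled)

print(f"true quit rate among vapers:     {true_rate:.1%}")
print(f"quit rate the study can observe: {observed_rate:.1%}")
```

Under these made-up assumptions the observable rate is well below the true one – not because vaping fails, but because the successes left the sampling frame before the count began.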
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to give up combustibles. Naturally, people who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some people who did manage to quit – while including people who have no intention of quitting anyway – would certainly seem likely to affect the outcome of a study purporting to measure successful quit attempts, even though Kalkhoran and Glantz argue that their “conclusion was insensitive to a variety of study design factors, including whether the study population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is also a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and, importantly, is often overlooked in media reporting, as well as by institutions’ public relations departments.