Response to comment from Egger and McKee entitled ‘Unreliable evidence from problematic risk of bias assessments: Comment on Begh et al., ‘Electronic cigarettes and subsequent cigarette smoking in young people: A systematic review”’
Jamie Hartmann-Boyce, Rachna Begh, Monserrat Conde, Lion Shahab, Sarah E. Jackson, Michael F. Pesko, Jonathan Livingstone-Banks, Dimitra Kale, Nancy A. Rigotti, Thomas Fanshawe, Dylan Kneale, Nicola Lindson
Addiction, vol. 120, no. 11, pp. 2359–2360. Published 2025-08-12. DOI: 10.1111/add.70166 (https://onlinelibrary.wiley.com/doi/10.1111/add.70166)
Abstract
We are pleased that this review [1] is encouraging discussion on this important topic. As a team, we care greatly about scientific discourse and opportunities for methodological improvement. This is why we made such efforts to be transparent about the risk of bias methods used in this article, without which such critiques would not be possible. However, right from the outset we note a logical disconnect in the critique by Egger and McKee [2]. The title of their letter starts with ‘unreliable evidence’, yet the authors appear to focus solely on our risk of bias tool rather than on providing a substantive critique of the actual findings from the reviewed studies or our synthesis of them. Regardless, we welcome the conversation.
In accordance with best practice, our methods were pre-registered and approved through a Cochrane peer review process before our review began [3]. Of our many methodological choices, Egger and McKee focus exclusively on our risk of bias assessment. They rightly acknowledge that there is no consensus among researchers as to how risk of bias assessment should be conducted for ecological studies of the type we include. We, therefore, considered this carefully and developed a risk of bias assessment tool for evaluating population-level studies, which we pre-registered in detail on Open Science Framework (https://osf.io/svgud).
While some co-authors of included studies participated in developing the risk of bias assessment tool, we consider this a strength and this was appropriately disclosed at the time. Having people who understand the study types included in a systematic review is extremely helpful as it can ensure results are understood and interpreted correctly. Further, as per Cochrane methods, these individuals were not involved in data extraction or making risk of bias judgements for their own studies.
ROBINS-E, arguably the most appropriate risk of bias assessment tool currently available, had not yet been approved for use when we conducted our review. We acknowledge this as a limitation of our review, but note that even had we used a different tool, our findings would remain largely unchanged. This is because the major impact of our risk of bias assessments is on our judgement of the certainty of the evidence (GRADE rating), which was also downgraded because of inconsistency. In other words, using a different risk of bias tool or giving different risk of bias ratings would not have changed our findings, only our confidence in them, and then only for some comparisons.
The authors state that ‘cohort studies [are] one of the most effective non-randomised study designs for determining cause and effect.’ However, reliance on a single type of study design undermines scientific robustness because of inherent biases associated with observational study designs (such as the threat of potentially unobserved common liabilities in cohort studies), resulting in specious causal claims based on replication of problematic studies [4-6]. We, therefore, advocate for triangulation of data from different study designs, an increasingly common approach in epidemiology, which is exactly what we did in this review. Instrumental variables and other quasi-experimental designs attempt to address unobserved common liabilities while evaluating real-world effects and can approximate counterfactuals to strengthen causal inference [7]. Whether or not they succeed in doing so is a reasonable question of interpretation and discussion, but in our opinion the solution is not to prioritize results from cohort studies over those from natural experiment studies. Rather, we acknowledge that each study design has different biases, which is a strength of triangulation. If consistent results are observed across different study designs, this provides greater confidence in the robustness of findings.
Our review concluded, verbatim, that ‘At an individual level, people who vape appear to be more likely to go on to smoke than people who do not vape; however, it is unclear if these behaviours are causally linked. Very low certainty evidence suggests that youth vaping and smoking could be inversely related.’ Regardless of our risk of bias assessments, these conclusions would still stand. Egger and McKee's conclusion that our review presents ‘unreliable evidence’ because of its risk of bias assessments is unsubstantiated and unfounded.