Response to comment from Egger and McKee entitled ‘Unreliable evidence from problematic risk of bias assessments: Comment on Begh et al., “Electronic cigarettes and subsequent cigarette smoking in young people: A systematic review”’

Addiction. Publication date: 2025-08-12. DOI: 10.1111/add.70166
Jamie Hartmann-Boyce, Rachna Begh, Monserrat Conde, Lion Shahab, Sarah E. Jackson, Michael F. Pesko, Jonathan Livingstone-Banks, Dimitra Kale, Nancy A. Rigotti, Thomas Fanshawe, Dylan Kneale, Nicola Lindson
{"title":"Response to comment from Egger and McKee entitled ‘Unreliable evidence from problematic risk of bias assessments: Comment on Begh et al., ‘Electronic cigarettes and subsequent cigarette smoking in young people: A systematic review”’","authors":"Jamie Hartmann-Boyce,&nbsp;Rachna Begh,&nbsp;Monserrat Conde,&nbsp;Lion Shahab,&nbsp;Sarah E. Jackson,&nbsp;Michael F. Pesko,&nbsp;Jonathan Livingstone-Banks,&nbsp;Dimitra Kale,&nbsp;Nancy A. Rigotti,&nbsp;Thomas Fanshawe,&nbsp;Dylan Kneale,&nbsp;Nicola Lindson","doi":"10.1111/add.70166","DOIUrl":null,"url":null,"abstract":"<p>We are pleased that this review [<span>1</span>] is encouraging discussion on this important topic. As a team, we care greatly about scientific discourse and opportunities for methodological improvement. This is why we took such efforts to be transparent in the risk of bias methods used in this article, without which such critiques would not be possible. However, right from the outset we note a logical disconnect in the critique by Egger and McKee [<span>2</span>]. The title of their letter starts with ‘unreliable evidence’, yet the authors appear to focus solely on our risk of bias tool rather than on providing a substantive critique of the actual findings from the reviewed studies or our synthesis of them. Regardless, we welcome the conversation.</p><p>In accordance with best practice, our methods were pre-registered and approved through a Cochrane peer review process before our review began [<span>3</span>]. Of our many methodological choices, Egger and McKee focus exclusively on our risk of bias assessment. They rightly acknowledge that there is no consensus among researchers as to how risk of bias assessment should be conducted for ecological studies of the type we include. We, therefore, considered this carefully and developed a risk of bias assessment tool for evaluating population-level studies, which we pre-registered in detail on Open Science Framework (https://osf.io/svgud).</p><p>While some co-authors of included studies participated in developing the risk of bias assessment tool, we consider this a strength and this was appropriately disclosed at the time. Having people who understand the study types included in a systematic review is extremely helpful as it can ensure results are understood and interpreted correctly. Further, as per Cochrane methods, these individuals were not involved in data extraction or making risk of bias judgements for their own studies.</p><p>ROBINS-E, which arguably may be the most appropriate risk of bias assessment tool currently available, was not yet approved for use when we conducted our review. We acknowledge this as a limitation in our review, but note that even had we used a different tool, our findings would remain largely unchanged. This is because the major impact of our risk of bias assessments is on our judgement of the certainty of the evidence (GRADE rating), which was also downgraded because of inconsistency. 
In other words, using a different risk of bias tool or giving different risk of bias ratings would not have changed our findings, only our confidence in them, and then only for some comparisons.</p><p>The authors state that ‘cohort studies [are] one of the most effective non-randomised study designs for determining cause and effect.’ However, reliance on a single type of study design undermines scientific robustness because of inherent biases associated with observational study designs (such as the threat of potentially unobserved common liabilities in cohort studies), resulting in specious causal claims based on replication of problematic studies [<span>4-6</span>]. We, therefore, advocate for triangulation of data from different study designs, an increasingly common approach in epidemiology, which is exactly what we did in this review. The use of instrumental variables and other types of quasi-experimental designs attempt to address unobserved common liabilities while evaluating real-world effects and can attempt to approximate counterfactuals to strengthen causal inference [<span>7</span>]. Whether they are able to do so, or not, is a reasonable question of interpretation and discussion, but in our opinion the solution is not to prioritize results from cohort studies over those from natural experiment studies. Rather, we acknowledge that each study design has different biases, which is a strength of triangulation. If consistent results are observed across different study designs, this provides greater confidence in the robustness of findings.</p><p>Our review concluded, verbatim, that ‘At an individual level, people who vape appear to be more likely to go on to smoke than people who do not vape; however, it is unclear if these behaviours are causally linked. Very low certainty evidence suggests that youth vaping and smoking could be inversely related.’ Regardless of our risk of bias assessments, these conclusions would still stand. Egger and McKee's conclusion that our review presents ‘unreliable evidence’ because of its risk of bias assessments is unsubstantiated and unfounded.</p><p>None.</p>","PeriodicalId":109,"journal":{"name":"Addiction","volume":"120 11","pages":"2359-2360"},"PeriodicalIF":5.3000,"publicationDate":"2025-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/add.70166","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Addiction","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/add.70166","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHIATRY","Score":null,"Total":0}
引用次数: 0


We are pleased that this review [1] is encouraging discussion on this important topic. As a team, we care greatly about scientific discourse and opportunities for methodological improvement. This is why we took such care to be transparent about the risk of bias methods used in this article; without that transparency, such critiques would not be possible. However, right from the outset we note a logical disconnect in the critique by Egger and McKee [2]. The title of their letter starts with ‘unreliable evidence’, yet the authors appear to focus solely on our risk of bias tool rather than on providing a substantive critique of the actual findings from the reviewed studies or our synthesis of them. Regardless, we welcome the conversation.

In accordance with best practice, our methods were pre-registered and approved through a Cochrane peer review process before our review began [3]. Of our many methodological choices, Egger and McKee focus exclusively on our risk of bias assessment. They rightly acknowledge that there is no consensus among researchers as to how risk of bias assessment should be conducted for ecological studies of the type we include. We, therefore, considered this carefully and developed a risk of bias assessment tool for evaluating population-level studies, which we pre-registered in detail on Open Science Framework (https://osf.io/svgud).

While some co-authors of included studies participated in developing the risk of bias assessment tool, we consider this a strength, and it was appropriately disclosed at the time. Involving people who understand the study types included in a systematic review is extremely helpful, as it helps ensure that results are understood and interpreted correctly. Further, as per Cochrane methods, these individuals were not involved in data extraction or in making risk of bias judgements for their own studies.

ROBINS-E, which is arguably the most appropriate risk of bias assessment tool currently available, was not yet approved for use when we conducted our review. We acknowledge this as a limitation of our review, but note that even had we used a different tool, our findings would remain largely unchanged. This is because the major impact of our risk of bias assessments is on our judgement of the certainty of the evidence (GRADE rating), which was also downgraded because of inconsistency. In other words, using a different risk of bias tool or giving different risk of bias ratings would not have changed our findings, only our confidence in them, and then only for some comparisons.

The authors state that ‘cohort studies [are] one of the most effective non-randomised study designs for determining cause and effect.’ However, reliance on a single type of study design undermines scientific robustness because of the inherent biases associated with observational study designs (such as the threat of potentially unobserved common liabilities in cohort studies), resulting in specious causal claims based on replication of problematic studies [4-6]. We, therefore, advocate for triangulation of data from different study designs, an increasingly common approach in epidemiology, which is exactly what we did in this review. Instrumental variables and other quasi-experimental designs attempt to address unobserved common liabilities while evaluating real-world effects, and can approximate counterfactuals to strengthen causal inference [7]. Whether or not they succeed in doing so is a reasonable question of interpretation and discussion, but in our opinion the solution is not to prioritize results from cohort studies over those from natural experiment studies. Rather, we acknowledge that each study design has different biases, which is a strength of triangulation. If consistent results are observed across different study designs, this provides greater confidence in the robustness of findings.
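To illustrate the reasoning above, the following is a minimal, purely hypothetical sketch of why an instrumental-variable (two-stage least squares) analysis can recover a causal effect that a naive regression misses when an unobserved common liability drives both behaviours. The variable names, effect sizes and instrument here are invented for illustration only and are not drawn from the review or from any included study.

```python
# Hypothetical illustration of an instrumental-variable (2SLS) analysis.
# All quantities are simulated; nothing here comes from the review's data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

u = rng.normal(size=n)                           # unobserved common liability
z = rng.binomial(1, 0.5, size=n).astype(float)   # hypothetical instrument (e.g. policy exposure)

x = 0.8 * z + 1.0 * u + rng.normal(size=n)       # exposure ("vaping"), driven by z and u
y = 0.3 * x + 1.0 * u + rng.normal(size=n)       # outcome ("smoking"), assumed true effect 0.3


def ols_slope(a, b):
    """Slope from an ordinary least squares regression of b on a (with intercept)."""
    design = np.column_stack([np.ones_like(a), a])
    coef, *_ = np.linalg.lstsq(design, b, rcond=None)
    return coef[1]


# Naive regression of outcome on exposure is inflated by the common liability u.
naive = ols_slope(x, y)

# Two-stage least squares: stage 1 predicts x from z; stage 2 regresses y on that prediction.
stage1 = ols_slope(z, x)
x_hat = x.mean() + stage1 * (z - z.mean())
iv = ols_slope(x_hat, y)

print(f"assumed true effect = 0.30, naive OLS ≈ {naive:.2f}, 2SLS ≈ {iv:.2f}")
```

In this toy setup the naive estimate is biased upward by the shared liability, whereas the instrument-based estimate approximates the assumed causal effect; this is the intuition behind triangulating cohort evidence with quasi-experimental designs rather than prioritising one over the other.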

Our review concluded, verbatim, that ‘At an individual level, people who vape appear to be more likely to go on to smoke than people who do not vape; however, it is unclear if these behaviours are causally linked. Very low certainty evidence suggests that youth vaping and smoking could be inversely related.’ Regardless of our risk of bias assessments, these conclusions would still stand. Egger and McKee's conclusion that our review presents ‘unreliable evidence’ because of its risk of bias assessments is unsubstantiated and unfounded.

None.
