Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study

IF 3.1 | CAS Tier 3 (Medicine) | JCR Q2 HEALTH CARE SCIENCES & SERVICES
Medical Decision Making · Pub Date: 2025-11-01 · Epub Date: 2025-08-11 · DOI: 10.1177/0272989X251352570
Eeva-Liisa Røssell, Jakob Hansen Viuff, Mette Lise Lousdal, Henrik Støvring
{"title":"用于评估乳腺癌筛查效果的评估模型的调查偏差:一项模拟研究。","authors":"Eeva-Liisa Røssell, Jakob Hansen Viuff, Mette Lise Lousdal, Henrik Støvring","doi":"10.1177/0272989X251352570","DOIUrl":null,"url":null,"abstract":"<p><p><b>Background.</b> Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on use of different study designs. One of these is the evaluation model, which extends follow-up after screening only if women have been diagnosed with breast cancer during the screening program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation and not breast cancer diagnosis. The aim of this study is to investigate potential bias induced by the evaluation model. <b>Methods.</b> We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986-1990 and 1991-1995). For the simulated datasets, 50% were randomly assigned to screening and 50% were not. Simulation scenarios depended on the magnitude of screening effect and level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates for the screening and nonscreening groups. For each design, these rates were compared to assess potential bias. <b>Results.</b> In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. Bias increased with overdiagnosis. <b>Conclusions.</b> The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, the attempt to capture more of the screening effect using the evaluation model comes at the risk of introducing bias.HighlightsThe validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.The evaluation model has been used to evaluate breast cancer screening in recent studies but introduces a study design based on breast cancer diagnosis that may introduce lead-time bias.We used large-scale simulated datasets to compare study designs used to evaluate screening.We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality in scenarios with no screening effect.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":" ","pages":"1025-1033"},"PeriodicalIF":3.1000,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study.\",\"authors\":\"Eeva-Liisa Røssell, Jakob Hansen Viuff, Mette Lise Lousdal, Henrik Støvring\",\"doi\":\"10.1177/0272989X251352570\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><b>Background.</b> Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on use of different study designs. 
One of these is the evaluation model, which extends follow-up after screening only if women have been diagnosed with breast cancer during the screening program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation and not breast cancer diagnosis. The aim of this study is to investigate potential bias induced by the evaluation model. <b>Methods.</b> We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986-1990 and 1991-1995). For the simulated datasets, 50% were randomly assigned to screening and 50% were not. Simulation scenarios depended on the magnitude of screening effect and level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates for the screening and nonscreening groups. For each design, these rates were compared to assess potential bias. <b>Results.</b> In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. Bias increased with overdiagnosis. <b>Conclusions.</b> The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, the attempt to capture more of the screening effect using the evaluation model comes at the risk of introducing bias.HighlightsThe validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.The evaluation model has been used to evaluate breast cancer screening in recent studies but introduces a study design based on breast cancer diagnosis that may introduce lead-time bias.We used large-scale simulated datasets to compare study designs used to evaluate screening.We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality in scenarios with no screening effect.</p>\",\"PeriodicalId\":49839,\"journal\":{\"name\":\"Medical Decision Making\",\"volume\":\" \",\"pages\":\"1025-1033\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical Decision Making\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/0272989X251352570\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/8/11 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Decision Making","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/0272989X251352570","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/8/11 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background. Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on use of different study designs. One of these is the evaluation model, which extends follow-up after screening only if women have been diagnosed with breast cancer during the screening program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation and not breast cancer diagnosis. The aim of this study is to investigate potential bias induced by the evaluation model.

Methods. We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986-1990 and 1991-1995). For the simulated datasets, 50% were randomly assigned to screening and 50% were not. Simulation scenarios depended on the magnitude of screening effect and level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates for the screening and nonscreening groups. For each design, these rates were compared to assess potential bias.

Results. In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. Bias increased with overdiagnosis.

Conclusions. The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, the attempt to capture more of the screening effect using the evaluation model comes at the risk of introducing bias.

Highlights
- The validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.
- The evaluation model has been used to evaluate breast cancer screening in recent studies but introduces a study design based on breast cancer diagnosis that may introduce lead-time bias.
- We used large-scale simulated datasets to compare study designs used to evaluate screening.
- We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality in scenarios with no screening effect.
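The Methods paragraph above reduces to a concrete recipe: simulate diagnosis and breast cancer death times, advance the diagnosis date by a lead time in the screened group without changing the death date (a no-effect scenario), and then tally deaths and risk time under the two accounting rules. The Python sketch below is a minimal, self-contained illustration of that comparison under strong simplifying assumptions: the parameter values and the function names (rate_incidence_based, rate_evaluation_model) are placeholders of this sketch rather than anything taken from the paper, all-cause mortality, age dependence, and overdiagnosis are ignored, and the two rate functions encode one plausible reading of the designs as summarized in the abstract, so it should not be expected to reproduce the reported 6% to 8% figure.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative placeholder parameters -- NOT the registry-derived values used in the paper.
N = 1_000_000             # simulated women
PROGRAM_END = 10.0        # the screening program covers years 0-10 after invitation
STUDY_END = 20.0          # administrative end of follow-up
ANNUAL_INCIDENCE = 0.003  # clinical breast cancer incidence per woman-year
MEAN_SURVIVAL = 8.0       # mean time from clinical diagnosis to breast cancer death (years)
MEAN_LEAD_TIME = 2.0      # mean advance of diagnosis achieved by screening (years)

# One counterfactual cohort: the same women are analysed with and without screening, so the
# two "arms" differ only in the diagnosis date (lead time), never in the date of death.
t_clinical = rng.exponential(1.0 / ANNUAL_INCIDENCE, N)   # time to clinical diagnosis
t_death = t_clinical + rng.exponential(MEAN_SURVIVAL, N)  # breast cancer death time, unaffected by screening
lead = rng.exponential(MEAN_LEAD_TIME, N)

dx_control = t_clinical
advanced = np.maximum(t_clinical - lead, 0.0)
# Screening can only bring a diagnosis forward while the program is running.
dx_screen = np.where(advanced < PROGRAM_END, advanced, t_clinical)


def rate_incidence_based(t_dx, t_bc_death):
    """Ordinary incidence-based mortality: every woman is followed from invitation to
    breast cancer death or the end of study, and every breast cancer death from a cancer
    diagnosed after invitation is counted (in this toy cohort all diagnoses occur after
    invitation, so no exclusion is needed)."""
    person_years = np.minimum(t_bc_death, STUDY_END)
    deaths = t_bc_death <= STUDY_END
    return deaths.sum() / person_years.sum()


def rate_evaluation_model(t_dx, t_bc_death):
    """Evaluation model as summarised in the abstract: follow-up stops at the end of the
    program unless the woman was diagnosed during the program, in which case her risk
    time and any breast cancer death keep counting until death or the end of study."""
    diagnosed_in_program = t_dx < PROGRAM_END
    end_of_followup = np.where(diagnosed_in_program,
                               np.minimum(t_bc_death, STUDY_END),
                               np.minimum(t_bc_death, PROGRAM_END))
    deaths = diagnosed_in_program & (t_bc_death <= end_of_followup)
    return deaths.sum() / end_of_followup.sum()


for name, rate in [("incidence-based mortality", rate_incidence_based),
                   ("evaluation model", rate_evaluation_model)]:
    ratio = rate(dx_screen, t_death) / rate(dx_control, t_death)
    print(f"{name:>26}: mortality rate ratio (screening vs. none) = {ratio:.3f}")
```

Because the death times are identical in both groups by construction, the incidence-based rate ratio in this sketch is exactly 1; any systematic departure from 1 under the evaluation-model accounting comes solely from conditioning the extension of risk time on diagnosis rather than on invitation, which is the mechanism the paper quantifies with registry-calibrated simulations.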

Source journal: Medical Decision Making (Medicine - Health Care Sciences & Services)
CiteScore: 6.50
Self-citation rate: 5.60%
Articles per year: 146
Review time: 6-12 weeks
About the journal: Medical Decision Making offers rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health care policy development. Using the fundamentals of decision analysis and theory, economic evaluation, and evidence based quality assessment, Medical Decision Making presents both theoretical and practical statistical and modeling techniques and methods from a variety of disciplines.