Eeva-Liisa Røssell, Jakob Hansen Viuff, Mette Lise Lousdal, Henrik Støvring
Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study
Medical Decision Making, pp. 1025-1033. Published 2025-11-01 (Epub 2025-08-11). DOI: 10.1177/0272989X251352570
Citations: 0
Abstract
Background. Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on the study design used. One such design is the evaluation model, which extends follow-up after the screening period only for women diagnosed with breast cancer during the screening program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation, not breast cancer diagnosis. The aim of this study was to investigate potential bias induced by the evaluation model.

Methods. We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and to a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986-1990 and 1991-1995). In each simulated dataset, 50% of women were randomly assigned to screening and 50% were not. Simulation scenarios varied the magnitude of the screening effect and the level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates in the screening and nonscreening groups. For each design, these rates were compared to assess potential bias.

Results. In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. The bias increased with overdiagnosis.

Conclusions. The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, attempting to capture more of the screening effect with the evaluation model comes at the risk of introducing bias.

Highlights
- The validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.
- The evaluation model has been used to evaluate breast cancer screening in recent studies but bases follow-up on breast cancer diagnosis, which may introduce lead-time bias.
- We used large-scale simulated datasets to compare study designs used to evaluate screening.
- We found that the evaluation model was biased by lead time, estimating reductions in breast cancer mortality even in scenarios with no screening effect.
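The core design issue the abstract describes, follow-up time that depends on diagnosis rather than on screening invitation, can be illustrated with a toy simulation. This is not the authors' Norwegian-registry model: the cohort size, incidence, lead time, survival distribution, and the simplified follow-up rules below are all assumptions, all-cause mortality is ignored, and the size and direction of the distortion will not match the paper's 6% to 8% figure. The sketch only shows that under a null screening effect, invitation-based follow-up leaves the mortality rate ratio at exactly 1, while an evaluation-model-style rule that extends follow-up only for women diagnosed during the program moves the ratio away from 1.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200_000          # women per arm (assumed; same underlying outcomes in both arms)
PROGRAM_END = 10.0   # screening program covers calendar years [0, 10)
EXTENDED_END = 15.0  # evaluation-model-style rule: extended follow-up for program cases
LEAD_TIME = 2.0      # screening advances diagnosis only; death times are unchanged (null effect)

# Underlying disease course, identical in both arms.
has_bc = rng.random(N) < 0.10              # lifetime diagnosis probability (assumed)
t_clin = rng.uniform(0.0, 20.0, N)         # clinical (symptomatic) diagnosis time
t_clin[~has_bc] = np.inf                   # never diagnosed
surv = rng.exponential(8.0, N)             # survival after clinical diagnosis (assumed)
t_death = t_clin + surv                    # breast cancer death time (other causes ignored)

dx_ctrl = t_clin                                  # control arm: clinical detection
dx_scr = np.maximum(0.0, t_clin - LEAD_TIME)      # screened arm: detection advanced by lead time

def rates(dx):
    """Breast cancer mortality rates under the two follow-up rules."""
    in_program = dx < PROGRAM_END
    # Invitation-based (ordinary incidence-based mortality): everyone is
    # followed to PROGRAM_END, regardless of diagnosis.
    dead_ibm = in_program & (t_death < PROGRAM_END)
    rate_ibm = dead_ibm.sum() / np.where(dead_ibm, t_death, PROGRAM_END).sum()
    # Evaluation-model-style: follow-up is extended to EXTENDED_END, but only
    # for women diagnosed during the program -- follow-up depends on diagnosis.
    end = np.where(in_program, EXTENDED_END, PROGRAM_END)
    dead_eval = in_program & (t_death < end)
    rate_eval = dead_eval.sum() / np.minimum(t_death, end).sum()
    return rate_ibm, rate_eval

ibm_ctrl, eval_ctrl = rates(dx_ctrl)
ibm_scr, eval_scr = rates(dx_scr)
print("invitation-based rate ratio:", ibm_scr / ibm_ctrl)    # exactly 1 under the null
print("evaluation-model rate ratio:", eval_scr / eval_ctrl)  # drifts away from 1
```

Because both arms share the same diagnosis and death times, the invitation-based ratio is exactly 1 here: advancing diagnosis changes neither the deaths counted before PROGRAM_END nor the person-years. The diagnosis-dependent rule, by contrast, treats the women whose clinical diagnosis falls just after the program differently in the two arms, which is the lead-time mechanism the paper investigates with far more realistic inputs.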
About the Journal:
Medical Decision Making offers rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health care policy development. Using the fundamentals of decision analysis and theory, economic evaluation, and evidence-based quality assessment, Medical Decision Making presents both theoretical and practical statistical and modeling techniques and methods from a variety of disciplines.