Implementing artificial intelligence in breast cancer screening: Women's preferences

Alison Pearce BAppSci(OT), MPH, PhD; Stacy Carter BAppSci, MPH(Hons), PhD; Helen ML Frazer MBBS, RANZCR, M Epi Biostat, GAICD; Nehmat Houssami MBBS(Hons), MPH, M Ed, FAFPHM, FASBP, PhD; Mary Macheras-Magias; Genevieve Webb; M. Luke Marinovich BA(Hons), MPH, PhD

Cancer, vol. 131, no. 9. Published 2025-04-22. DOI: 10.1002/cncr.35859
Citations: 0
Abstract
Background
Artificial intelligence (AI) could improve accuracy and efficiency of breast cancer screening. However, many women distrust AI in health care, potentially jeopardizing breast cancer screening participation rates. The aim was to quantify community preferences for models of AI implementation within breast cancer screening.
Methods
An online discrete choice experiment survey was conducted of people eligible for breast cancer screening, aged 40 to 74 years, in Australia. Respondents answered 10 questions, in each choosing between two screening options created by an experimental design. Each screening option described the role of AI (supplementing current practice, replacing one radiologist, replacing both radiologists, or triaging), as well as the AI's accuracy, ownership, representativeness, privacy, and waiting time for results. Analysis included conditional and latent class models, willingness-to-pay estimation, and predicted screening uptake.
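Discrete choice experiments of this kind are typically analysed with a conditional logit model, in which each option's attributes enter a linear utility function and choice probabilities follow the logit formula. A minimal sketch, using purely hypothetical coefficients (not the paper's estimates) and made-up attribute levels:

```python
import math

# Hypothetical part-worth utilities for a conditional logit model of
# screening choice. Illustrative only; not the study's fitted values.
BETA = {
    "accuracy_pp": 0.08,       # per percentage-point gain in accuracy
    "australian_owned": 0.30,  # indicator: AI is Australian owned
    "wait_days": -0.05,        # per day of waiting for results
}

def utility(option):
    """Linear-in-attributes utility, as in a standard conditional logit."""
    return sum(BETA[attr] * level for attr, level in option.items())

def choice_prob(option_a, option_b):
    """Probability of choosing option A over option B (logit formula)."""
    ua, ub = utility(option_a), utility(option_b)
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

# Option A: AI-supported reading, slightly more accurate, faster results.
a = {"accuracy_pp": 2, "australian_owned": 1, "wait_days": 7}
# Option B: current practice, standard waiting time.
b = {"accuracy_pp": 0, "australian_owned": 1, "wait_days": 14}

print(round(choice_prob(a, b), 3))  # A is preferred: probability > 0.5
```

In the study, such models were estimated from the 10 repeated choices per respondent, with latent class models capturing the distinct preference segments reported in the results.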
Results
The 802 participants preferred screening where the AI was more accurate, Australian owned, and more representative, and where waiting time for results was shorter (all p < .001). There were strong preferences (p < .001) against AI alone or AI as triage. Three patterns of preferences emerged: positive about AI if accuracy improves (40% of the sample), strongly against AI (42%), and concerned about AI (18%). Participants were willing to accept AI replacing one human reader if their results were available 10 days faster than under current practice, but would need results 21 days faster to accept AI as triage. Implementing AI in a way inconsistent with community preferences could reduce participation by up to 22%.
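The waiting-time trade-offs above are marginal rates of substitution: the disutility of an AI role divided by the per-day disutility of waiting gives the time saving needed to compensate. A sketch with hypothetical coefficients, chosen only so the ratios reproduce the reported 10- and 21-day figures:

```python
# Marginal rate of substitution between an AI role and waiting time.
# All coefficients are hypothetical, scaled to match the trade-offs
# reported in the abstract; they are not the study's estimates.
beta_wait = -0.05         # utility per additional day waiting for results
beta_replace_one = -0.50  # disutility of AI replacing one human reader
beta_triage = -1.05       # disutility of AI used as triage

def days_faster_needed(beta_role, beta_wait):
    """Days results must arrive sooner to offset a role's disutility."""
    return beta_role / beta_wait

print(days_faster_needed(beta_replace_one, beta_wait))  # about 10 days
print(days_faster_needed(beta_triage, beta_wait))       # about 21 days
```

The same ratio logic underlies willingness-to-pay estimates generally: dividing any attribute coefficient by the cost (here, time) coefficient converts utility into the numéraire's units.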
Journal overview:
The CANCER site is a full-text, electronic implementation of CANCER, an Interdisciplinary International Journal of the American Cancer Society, and CANCER CYTOPATHOLOGY, a Journal of the American Cancer Society.
CANCER publishes interdisciplinary oncologic information relating to, but not limited to, the following disease sites and disciplines: blood/bone marrow; breast disease; endocrine disorders; epidemiology; gastrointestinal tract; genitourinary disease; gynecologic oncology; head and neck disease; hepatobiliary tract; integrated medicine; lung disease; medical oncology; neuro-oncology; pathology; radiation oncology; translational research.