{"title":"Acceptability of Using Artificial Intelligence in the National Health Service Breast Screening Program: A Randomized Online Survey of Screening-Eligible Women in England","authors":"Lauren Gatting PhD , Charlotte Kelley Jones PhD , Babak Jamshidi PhD , Angie A. Kehagia PhD , Jo Waller PhD","doi":"10.1016/j.mcpdig.2025.100329","DOIUrl":"10.1016/j.mcpdig.2025.100329","url":null,"abstract":"<div><h3>Objective</h3><div>To compare acceptability of 2 artificial intelligence (AI) use cases in the English National Health Service Breast Screening Program.</div></div><div><h3>Patients and Methods</h3><div>From February 7 to March 14, 2024, we conducted an online survey, randomizing participants to information about using AI either as the second mammogram reader or to triage mammograms. In the triage scenario, only higher-risk images would be reviewed by a human reader. The survey was completed by 3419 women aged 45 to 70 years, recruited from an online panel. The primary outcome was acceptability of the presented AI use case. We assessed a range of psychological and demographic factors. Regression modeling examined predictors of acceptability.</div></div><div><h3>Results</h3><div>Using AI as a second reader was rated as more acceptable (<em>P</em><.001), less concerning (<em>P</em><.001), and less likely to put people off screening (<em>P</em>=.001) than using it as a triage tool. In both groups, most women said AI would not affect their breast screening attendance (1251/1710 [73%] and 1195/1709 [70%] in the second reader and triage groups, respectively). Nevertheless, 15% (498/3419) of participants stated that the use of AI would make them less likely to attend. 
After adjusting for AI use case, acceptability was higher in respondents of older age, White ethnicity, higher education, greater AI knowledge, and with more positive attitudes toward both AI and breast screening.</div></div><div><h3>Conclusion</h3><div>Artificial intelligence in breast screening was rated as more acceptable if used alongside, rather than instead of, a human reader. Ongoing careful evaluation is needed to ensure its roll-out does not widen existing social inequalities and that the risk-benefit profile of screening is maintained.</div></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100329"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of an Artificial Intelligence Defined Lung Nodule Malignancy Score in Incidental Pulmonary Nodules: The CREATE Study","authors":"Deniz Koksal MD , Arunkumar Govindarajan MD , Hari Kishan Gonuguntla MD , Sibel Nayci MD , Ricardo Cordova MD , Mohamed Helmy Zidan MD , Susan McCutcheon PhD , Ashwini Saha MBBS , Pushpalatha Kantharaju BAMS , Sagar Sen BTech , Rohitashva Agrawal MPH , Laksmi Wulandari PhD","doi":"10.1016/j.mcpdig.2026.100335","DOIUrl":"10.1016/j.mcpdig.2026.100335","url":null,"abstract":"<div><h3>Objective</h3><div>To evaluate the effectiveness of the artificial intelligence–based qXR lung nodule malignancy score (qXR-LNMS) in detecting high-risk incidental pulmonary nodules (IPNs) on chest X-rays (CXRs).</div></div><div><h3>Patients and Methods</h3><div>The CREATE study (NCT05817110), a prospective, observational study of participants aged 35 years or older with an IPN (size, ≥8 to ≤30 mm) on CXR, enrolled 712 participants (498 high-risk and 214 low-risk) between April 1, 2023, and December 31, 2024. Nodules were flagged by the Food and Drug Administration–cleared qXR detection algorithm and confirmed by radiologists. Thresholds for success were set at 20% for positive predictive value (PPV) and 70% for negative predictive value (NPV). The primary outcomes were the PPV and NPV of qXR-LNMS against the risk of malignancy assessed by radiologists using low-dose computed tomography (LDCT) and binarized risk categories based on the Lung-RADS score and the Mayo Clinic model; secondary outcomes were PPVs and NPVs by clinicodemographic characteristics, with 95% CIs calculated using the Wilson score method.</div></div><div><h3>Results</h3><div>Overall, the PPV and the NPV of qXR-LNMS risk prediction against radiologists’ assessment on LDCT were 54.2% (95% CI, 49.8-58.5) and 93.5% (95% CI, 89.3-96.1), respectively. Agreement between the Mayo Clinic model and qXR-LNMS was observed in 70.6% of participants (Spearman correlation, 0.247). 
Results across key subgroups were consistent, with all PPV and NPV point estimates exceeding the prespecified thresholds.</div></div><div><h3>Conclusion</h3><div>The results demonstrate the potential of qXR-LNMS in predicting benign and malignant IPNs on CXR, thereby supporting lung cancer screening, particularly in resource-limited settings, although further validation is needed.</div></div><div><h3>Trial Registration</h3><div><span><span>clinicaltrials.gov</span><svg><path></path></svg></span> Identifier: NCT05817110</div></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100335"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146230117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
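The CREATE abstract reports 95% CIs computed with the Wilson score method. A minimal sketch of that interval is below; the count 270/498 is an illustrative back-calculation from the reported overall PPV of 54.2% (assuming the 498 high-risk participants formed the denominator), not a figure stated in the study.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% CI for a binomial proportion k successes out of n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Illustrative back-calculation (270/498 is an assumption, not a study figure):
lo, hi = wilson_ci(270, 498)  # → roughly (0.498, 0.585), i.e., 49.8-58.5
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 0 or 1, which is presumably why it was chosen for subgroup PPVs/NPVs.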
{"title":"Artificial Intelligence Research as a Continuous Clinical Service","authors":"Yixi Xu PhD, Rahul Dodhia PhD, Juan M. Lavista Ferres PhD, MS, William B. Weeks MD, PhD, MBA","doi":"10.1016/j.mcpdig.2025.100330","DOIUrl":"10.1016/j.mcpdig.2025.100330","url":null,"abstract":"","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100330"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clinical Reformation in the Age of Artificial Intelligence: Safeguarding the Ethical Centre of Medicine","authors":"Ian Io Lei MD , Wojciech Marlicz MD, PhD , Ramesh P. Arasaradnam MD, PhD , Anastasios Koulaouzidis MD, PhD","doi":"10.1016/j.mcpdig.2025.100310","DOIUrl":"10.1016/j.mcpdig.2025.100310","url":null,"abstract":"","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100310"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145841148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximizing Efficiency of Artificial Intelligence-Enabled Ambient Scribes in Outpatient Settings: A Pragmatic Approach to Structuring the Patient Appointment","authors":"Jason D. Greenwood MD, MS , Marc R. Matthews MD , Joshua D. Overgaard MD , Joshua W. Ohde PhD","doi":"10.1016/j.mcpdig.2026.100339","DOIUrl":"10.1016/j.mcpdig.2026.100339","url":null,"abstract":"","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100339"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147277940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer-Aided Analysis of Photographed Chest X-Ray Films Performs Well Compared With Trained Radiologists","authors":"Zerubabel Desita MD , Temesgen Tadesse MD , Anders Solitander Bohlbro MD , Armando Sifna MD , Hikma Fekadu MD , Segenet Bizuneh MD , Sruti Sridhar , Dennis Robert , Manoj Tadepalli , Christian Wejse , Thomas Schön , Frauke Rudolf MD, PhD","doi":"10.1016/j.mcpdig.2026.100338","DOIUrl":"10.1016/j.mcpdig.2026.100338","url":null,"abstract":"<div><h3>Objective</h3><div>To assess whether computer-aided detection (CAD) chest X-ray (CXR) software may aid physicians in low-resource, highly tuberculosis (TB)-endemic settings where radiologists are scarce.</div></div><div><h3>Patients and Methods</h3><div>A retrospective pilot study was conducted on CXR films taken between January 1, 2017, and March 30, 2018, in Guinea-Bissau and Ethiopia to compare the interpretation of CXRs regarding pulmonary TB (PTB) by CAD (qXR; Qure.ai) with that of 2 experienced Ethiopian radiologists (A and B). To improve the applicability of this method in low-resource settings, the analysis was performed on images of CXR films taken by mobile phones. Two reference standards were applied: final PTB diagnosis based on clinical or laboratory findings, and Xpert MTB/RIF (Xpert)-confirmed PTB.</div></div><div><h3>Results</h3><div>We included 498 CXRs from patients seeking care for TB-indicative symptoms. Radiologist A identified 50, radiologist B identified 99, and the software identified 81 as indicative of TB. The overall area under the receiver operating characteristic curve of the software was 0.84 for Xpert-confirmed cases. At the prechosen cutoff value of 0.5, the sensitivity of CAD CXR was 76.5%, and the specificity was 85.9%. Radiologist A’s assessments were 64.7% sensitive and 91.9% specific, whereas radiologist B’s assessments were 76.5% sensitive and 82.3% specific for Xpert-confirmed cases. 
Agreement on TB-related findings was moderate, both between the 2 radiologists (κ=0.45) and between each radiologist and the software (κ=0.56).</div></div><div><h3>Conclusion</h3><div>Our study revealed that CAD CXR performs comparably with experienced radiologists when applied to CXR films photographed by mobile phones and a digital camera with similar sensor resolutions.</div></div><div><h3>Trial Registration</h3><div>PACTR201611001838365.</div></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100338"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147357927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
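The κ values in the abstract above are Cohen's kappa, which corrects raw percentage agreement for agreement expected by chance. A minimal sketch of the computation, run on toy labels (illustrative data only, not the study's actual ratings):

```python
from collections import Counter

def cohen_kappa(a: list, b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: product of each rater's marginal label frequencies
    pe = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / n**2
    return (po - pe) / (1 - pe)

# Toy example (hypothetical, not study data): two raters on 4 films,
# 1 = TB-indicative, 0 = not indicative
kappa = cohen_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # → 0.5
```

Values of 0.41-0.60 are conventionally read as "moderate" agreement, which matches how the abstract characterizes κ=0.45 and κ=0.56.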
{"title":"The AENEAS Project: Intraoperative Anatomical Guidance Through Real-Time Landmark Detection Using Machine Vision","authors":"Simone Olei MD , Gary Sarwin MSc , Victor E. Staartjes MD, PhD , Luca Zanuttini MD , Seungjun Ryu MD , Luca Regli MD , Ender Konukoglu PhD , Carlo Serra MD","doi":"10.1016/j.mcpdig.2025.100308","DOIUrl":"10.1016/j.mcpdig.2025.100308","url":null,"abstract":"<div><h3>Objective</h3><div>To investigate the performance of a deep learning machine vision-based model in identifying anatomical landmarks in a complex microsurgical setting, such as the pterional trans-Sylvian approach.</div></div><div><h3>Patients and Methods</h3><div>We developed a deep learning object detection model (YOLOv7x) trained on 5307 labeled frames from 78 surgical videos of 76 patients undergoing the pterional trans-Sylvian approach from January 1, 2020, to June 30, 2024. Surgical steps were standardized, and key anatomical targets—frontal/temporal dura, inferior frontal/superior temporal gyri, optic and olfactory nerves, and internal carotid artery—were annotated by specifically trained neurosurgical residents and verified by the operating surgeon. Bounding boxes derived from segmentation masks served as training inputs. Performance was evaluated using 5-fold cross-validation.</div></div><div><h3>Results</h3><div>The model achieved promising detection performance for deep structures, particularly the optic nerve (average precision at an intersection over union threshold of 0.50 [AP<sub>50</sub>]: 0.73) and internal carotid artery (AP<sub>50</sub>: 0.67). Superficial structures, such as the dura and cortical gyri, had lower precision (AP<sub>50</sub> range: 0.25-0.45), likely due to morphological similarity and optical variability. 
Performance variability across classes reflects the complexity of the anatomical setting along with data limitations.</div></div><div><h3>Conclusion</h3><div>Applying machine vision techniques for anatomical detection in a complex neurosurgical setting is feasible. Although challenges remain in detecting less distinctive structures, the high accuracy achieved for deep anatomical landmarks validates this approach. This study marks an essential step toward the development of machine vision-powered anatomical recognition tools, with the prospective goal of improving intraoperative orientation and reducing variability among surgeons.</div></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"4 1","pages":"Article 100308"},"PeriodicalIF":0.0,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145841147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
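The AP50 metric reported above counts a predicted bounding box as correct only when its intersection over union (IoU) with the ground-truth box is at least 0.50. A minimal IoU sketch for axis-aligned boxes (the coordinates are illustrative, not taken from the study):

```python
def iou(a: tuple, b: tuple) -> float:
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x1 region: IoU = 1 / (4 + 4 - 1)
score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # → 1/7 ≈ 0.143
hit_at_50 = score >= 0.50                # → False at the AP50 threshold
```

AP50 then averages precision over recall levels using this 0.50-IoU match criterion, so a class can score low either from missed detections or from boxes that overlap the target by less than half.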
{"title":"A Standardized Temporal Segmentation Framework and Annotation Resource Library in Robotic Surgery","authors":"Busisiwe Mlambo MD , Mallory Shields PhD , Simon Bach MD , Armin Bauer PhD , Andrew Hung MD , Omar Yusef Kudsi MD , Felix Neis MD , John Lazar MD , Daniel Oh MD , Robert Perez MD , Seth Rosen MD , Naeem Soomro MD , Michael Stany MD , Mark Tousignant MD , Christian Wagner MD , Ken Whaler MS , Lilia Purvis MS , Benjamin Mueller BS , Sadia Yousaf MD , Casey Troxler BS , Anthony Jarc PhD","doi":"10.1016/j.mcpdig.2025.100257","DOIUrl":"10.1016/j.mcpdig.2025.100257","url":null,"abstract":"<div><h3>Objective</h3><div>To develop and share the first clinical temporal annotation guide library for 10 robotic procedures, accompanied by a standardized ontology framework for surgical video annotation.</div></div><div><h3>Patients and Methods</h3><div>A standardized temporal annotation framework for surgical videos, paired with consistent, procedure-specific annotation guides, is critical to enable comparisons of surgical insights and to facilitate large-scale analysis of exceptional surgical practice. Existing ontologies and guidance provide foundational frameworks but offer limited scalability in clinical settings. Building on these, we developed a temporal annotation framework with nested surgical phases, steps, tasks, and subtasks. 
Procedure-specific annotation resource guides consistent with this framework that define each surgical segment with formulaic start and stop parameters and surgical objectives were iteratively created across 7 years (January 1, 2018, to January 1, 2025) through global research collaborations with surgeon researchers and industry scientists.</div></div><div><h3>Results</h3><div>We provide the first resource library of annotation guides for 10 common robotic procedures consistent with our proposed temporal annotation framework, enabling consistent annotations for clinicians and large-scale data comparisons with computer-readable examples. These have been used in over 13,000 annotated surgical cases globally, demonstrating reproducibility and broad applicability.</div></div><div><h3>Conclusion</h3><div>This resource library and accompanying ontology framework provide critical structure for standardized temporal segmentation in robotic surgery. This framework has been applied globally in private studies examining surgical objective performance metrics, surgical education, workflow characterization, outcome prediction, algorithms for surgical activity recognition, and more. Adoption of these resources will unify clinical, academic, and industry efforts, ultimately catalyzing transformational advancements in surgical practice.</div></div>","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"3 4","pages":"Article 100257"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145109429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Barriers to Radiomics Adoption for Urological Cancer Diagnosis in Low-Income and Middle-Income Countries: A Perspective from Pakistan","authors":"Awais Ayub MBBS, Hanan Mudassar MBBS, Maida Rizwan MBBS","doi":"10.1016/j.mcpdig.2025.100262","DOIUrl":"10.1016/j.mcpdig.2025.100262","url":null,"abstract":"","PeriodicalId":74127,"journal":{"name":"Mayo Clinic Proceedings. Digital health","volume":"3 4","pages":"Article 100262"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145222261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}