Radiology-Artificial Intelligence最新文献

筛选
英文 中文
Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI. 基于深度学习的齿状核定量敏感性成像自动分割。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-08-06 DOI: 10.1148/ryai.240478
Diogo H Shiraishi, Susmita Saha, Isaac M Adanyeguh, Sirio Cocozza, Louise A Corben, Andreas Deistung, Martin B Delatycki, Imis Dogan, William Gaetz, Nellie Georgiou-Karistianis, Simon Graf, Marina Grisoli, Pierre-Gilles Henry, Gustavo M Jarola, James M Joers, Christian Langkammer, Christophe Lenglet, Jiakun Li, Camila C Lobo, Eric F Lock, David R Lynch, Thomas H Mareci, Alberto R M Martinez, Serena Monti, Anna Nigri, Massimo Pandolfo, Kathrin Reetz, Timothy P Roberts, Sandro Romanzetti, David A Rudko, Alessandra Scaravilli, Jörg B Schulz, S H Subramony, Dagmar Timmann, Marcondes C França, Ian H Harding, Thiago J R Rezende
{"title":"Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI.","authors":"Diogo H Shiraishi, Susmita Saha, Isaac M Adanyeguh, Sirio Cocozza, Louise A Corben, Andreas Deistung, Martin B Delatycki, Imis Dogan, William Gaetz, Nellie Georgiou-Karistianis, Simon Graf, Marina Grisoli, Pierre-Gilles Henry, Gustavo M Jarola, James M Joers, Christian Langkammer, Christophe Lenglet, Jiakun Li, Camila C Lobo, Eric F Lock, David R Lynch, Thomas H Mareci, Alberto R M Martinez, Serena Monti, Anna Nigri, Massimo Pandolfo, Kathrin Reetz, Timothy P Roberts, Sandro Romanzetti, David A Rudko, Alessandra Scaravilli, Jörg B Schulz, S H Subramony, Dagmar Timmann, Marcondes C França, Ian H Harding, Thiago J R Rezende","doi":"10.1148/ryai.240478","DOIUrl":"10.1148/ryai.240478","url":null,"abstract":"<p><p>Purpose To develop a dentate nucleus (DN) segmentation tool using deep learning (DL) applied to brain MRI-based quantitative susceptibility mapping (QSM) images. Materials and Methods Brain QSM images from healthy controls and individuals with cerebellar ataxia or multiple sclerosis were collected from nine different datasets (2016-2023) worldwide for this retrospective study (ClinicalTrials.gov Identifier: NCT04349514). Manual delineation of the DN was performed by experienced raters. Automated segmentation performance was evaluated against manual reference segmentations following training with several DL architectures. A two-step approach was used, consisting of a localization model followed by DN segmentation. Performance metrics included intraclass correlation coefficient (ICC), Dice score, and Pearson correlation coefficient. Results The training and testing datasets comprised 328 individuals (age range, 11-64 years; 171 female), including 141 healthy individuals and 187 with cerebellar ataxia or multiple sclerosis. The manual tracing protocol produced reference standards with high intrarater (average ICC 0.91) and interrater reliability (average ICC 0.78). Initial DL architecture exploration indicated that the nnU-Net framework performed best. The two-step localization plus segmentation pipeline achieved a Dice score of 0.90 ± 0.03 and 0.89 ± 0.04 for left and right DN segmentation, respectively. In external testing, the proposed algorithm outperformed the current leading automated tool (mean Dice scores for left and right DN: 0.86 ± 0.04 vs 0.57 ± 0.22, <i>P</i> < .001; 0.84 ± 0.07 vs 0.58 ± 0.24, <i>P</i> < .001). The model demonstrated generalizability across datasets unseen during the training step, with automated segmentations showing high correlation with manual annotations (left DN: r = 0.74; <i>P</i> < .001; right DN: r = 0.48; <i>P</i> = .03). Conclusion The proposed model accurately and efficiently segmented the DN from brain QSM images. The model is publicly available (https://github.com/art2mri/DentateSeg). ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240478"},"PeriodicalIF":13.2,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144790143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Evolution of Radiology Image Annotation in the Era of Large Language Models. 大语言模型时代放射学图像标注的演变
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240631
Adam E Flanders, Xindi Wang, Carol C Wu, Felipe C Kitamura, George Shih, John Mongan, Yifan Peng
{"title":"The Evolution of Radiology Image Annotation in the Era of Large Language Models.","authors":"Adam E Flanders, Xindi Wang, Carol C Wu, Felipe C Kitamura, George Shih, John Mongan, Yifan Peng","doi":"10.1148/ryai.240631","DOIUrl":"10.1148/ryai.240631","url":null,"abstract":"<p><p>Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. <b>Keywords:</b> Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240631"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319696/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144048059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Unlocking Robust Segmentation: Decoding Domain Randomization for Radiologists. 解锁鲁棒分割:解码领域随机化放射科医生。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250384
John D Mayfield
{"title":"Unlocking Robust Segmentation: Decoding Domain Randomization for Radiologists.","authors":"John D Mayfield","doi":"10.1148/ryai.250384","DOIUrl":"10.1148/ryai.250384","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 4","pages":"e250384"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care. 医疗保健中大型语言模型的网络安全威胁和缓解策略。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240739
Tugba Akinci D'Antonoli, Ali S Tejani, Bardia Khosravi, Christian Bluethgen, Felix Busch, Keno K Bressem, Lisa C Adams, Mana Moassefi, Shahriar Faghani, Judy Wawira Gichoya
{"title":"Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care.","authors":"Tugba Akinci D'Antonoli, Ali S Tejani, Bardia Khosravi, Christian Bluethgen, Felix Busch, Keno K Bressem, Lisa C Adams, Mana Moassefi, Shahriar Faghani, Judy Wawira Gichoya","doi":"10.1148/ryai.240739","DOIUrl":"10.1148/ryai.240739","url":null,"abstract":"<p><p>The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to biases and threats common to all artificial intelligence (AI) systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these AI models are deployed in health care. LLMs can be exploited in several ways, such as malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could change their results in a way that is beneficial for the malicious actors. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy. <b>Keywords:</b> Computer Applications-General (Informatics), Application Domain, Large Language Models, Artificial Intelligence, Cybersecurity © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240739"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143999117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Learning with Domain Randomization in Image and Feature Spaces for Abdominal Multiorgan Segmentation on CT and MRI Scans. 基于图像和特征空间随机化的深度学习用于腹部CT和MRI多器官分割。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240586
Yu Shi, Lixia Wang, Touseef Ahmad Qureshi, Zengtian Deng, Yibin Xie, Debiao Li
{"title":"Deep Learning with Domain Randomization in Image and Feature Spaces for Abdominal Multiorgan Segmentation on CT and MRI Scans.","authors":"Yu Shi, Lixia Wang, Touseef Ahmad Qureshi, Zengtian Deng, Yibin Xie, Debiao Li","doi":"10.1148/ryai.240586","DOIUrl":"10.1148/ryai.240586","url":null,"abstract":"<p><p>Purpose To develop a deep learning segmentation model that can segment abdominal organs on CT and MRI scans with high accuracy and generalization ability. Materials and Methods In this study, an extended nnU-Net model was trained for abdominal organ segmentation. A domain randomization method in both the image and feature space was developed to improve the generalization ability under cross-site and cross-modality settings on public prostate MRI and abdominal CT and MRI datasets. The prostate MRI dataset contains data from multiple health care institutions, with domain shifts. The abdominal CT and MRI dataset is structured for cross-modality evaluation: training on one modality (eg, MRI) and testing on the other (eg, CT). This domain randomization method was then used to train a segmentation model with enhanced generalization ability on the abdominal multiorgan segmentation challenge dataset to improve abdominal CT and MR multiorgan segmentation, and the model was compared with two commonly used segmentation algorithms (TotalSegmentator and MRSegmentator). Model performance was evaluated using the Dice similarity coefficient (DSC). Results The proposed domain randomization method showed improved generalization ability on the cross-site and cross-modality datasets compared with the state-of-the-art methods. The segmentation model using this method outperformed two other publicly available segmentation models on data from unseen test domains (mean DSC, 0.88 vs 0.79 [<i>P</i> < .001] and 0.88 vs 0.76 [<i>P</i> < .001]). Conclusion The combination of image and feature domain randomizations improved the accuracy and generalization ability of deep learning-based abdominal segmentation on CT and MR images. <b>Keywords:</b> Segmentation, CT, MR Imaging, Neural Networks, MRI, Domain Randomization <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Mayfield in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240586"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476582/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144112076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Optimizing the Trade-off between Privacy and Utility in Medical Imaging Federated Learning. 医学影像联合学习中隐私与效用的权衡优化。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250434
Zekai Yu
{"title":"Optimizing the Trade-off between Privacy and Utility in Medical Imaging Federated Learning.","authors":"Zekai Yu","doi":"10.1148/ryai.250434","DOIUrl":"10.1148/ryai.250434","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 4","pages":"e250434"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144745348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AI to Measure Nuchal Translucency: Improved Speed and Accuracy, but Is It Still Relevant? 人工智能测量颈部透明度:提高速度和准确性,但它仍然相关吗?
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250231
Steven C Horii
{"title":"AI to Measure Nuchal Translucency: Improved Speed and Accuracy, but Is It Still Relevant?","authors":"Steven C Horii","doi":"10.1148/ryai.250231","DOIUrl":"10.1148/ryai.250231","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 4","pages":"e250231"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144486217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US. 实时产前颈部透明度评估的人工智能模型:高性能和工作流集成。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240498
Yuanji Zhang, Xin Yang, Chunya Ji, Xindi Hu, Yan Cao, Chaoyu Chen, He Sui, Binghan Li, Chaojiong Zhen, Weijun Huang, Xuedong Deng, Linliang Yin, Dong Ni
{"title":"Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US.","authors":"Yuanji Zhang, Xin Yang, Chunya Ji, Xindi Hu, Yan Cao, Chaoyu Chen, He Sui, Binghan Li, Chaojiong Zhen, Weijun Huang, Xuedong Deng, Linliang Yin, Dong Ni","doi":"10.1148/ryai.240498","DOIUrl":"10.1148/ryai.240498","url":null,"abstract":"<p><p>Purpose To develop and evaluate an artificial intelligence-based model for real-time nuchal translucency (NT) plane identification and measurement in prenatal US assessments. Materials and Methods In this retrospective multicenter study conducted from January 2022 to October 2023, the Automated Identification and Measurement of NT (AIM-NT) model was developed and evaluated using internal and external datasets. NT plane assessment, including identification of the NT plane and measurement of NT thickness, was independently conducted by AIM-NT and experienced radiologists, with the results subsequently audited by radiology specialists and accuracy compared between groups. To assess alignment of artificial intelligence with radiologist workflow, discrepancies between the AIM-NT model and radiologists in NT plane identification time and thickness measurements were evaluated. Results The internal dataset included a total of 3959 NT images from 3153 fetuses, and the external dataset included 267 US videos from 267 fetuses. On the internal testing dataset, AIM-NT achieved an area under the receiver operating characteristic curve of 0.92 for NT plane identification. On the external testing dataset, there was no evidence of differences between AIM-NT and radiologists in NT plane identification accuracy (88.8% vs 87.6%, <i>P</i> = .69) or NT thickness measurements on standard and nonstandard NT planes (<i>P</i> = .29 and .59). AIM-NT demonstrated high consistency with radiologists in NT plane identification time, with 1-minute discrepancies observed in 77.9% of cases, and NT thickness measurements, with a mean difference of 0.03 mm and mean absolute error of 0.22 mm (95% CI: 0.19, 0.25). Conclusion AIM-NT demonstrated high accuracy in identifying the NT plane and measuring NT thickness on prenatal US images, showing minimal discrepancies with radiologist workflow. <b>Keywords:</b> Ultrasound, Fetus, Segmentation, Feature Detection, Diagnosis, Convolutional Neural Network (CNN) <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Horii in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240498"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143999114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
RadioRAG: Online Retrieval-Augmented Generation for Radiology Question Answering. RadioRAG:放射学问答的在线检索增强生成。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240476
Soroosh Tayebi Arasteh, Mahshad Lotfinia, Keno Bressem, Robert Siepmann, Lisa Adams, Dyke Ferber, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn
{"title":"RadioRAG: Online Retrieval-Augmented Generation for Radiology Question Answering.","authors":"Soroosh Tayebi Arasteh, Mahshad Lotfinia, Keno Bressem, Robert Siepmann, Lisa Adams, Dyke Ferber, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn","doi":"10.1148/ryai.240476","DOIUrl":"10.1148/ryai.240476","url":null,"abstract":"<p><p>Purpose To evaluate diagnostic accuracy of various large language models (LLMs) when answering radiology-specific questions with and without access to additional online, up-to-date information via retrieval-augmented generation (RAG). Materials and Methods The authors developed radiology RAG (RadioRAG), an end-to-end framework that retrieves data from authoritative radiologic online sources in real-time. RAG incorporates information retrieval from external sources to supplement the initial prompt, grounding the model's response in relevant information. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions with reference standard answers, LLMs (GPT-3.5-turbo [OpenAI], GPT-4, Mistral 7B, Mixtral 8×7B [Mistral], and Llama3-8B and -70B [Meta]) were prompted with and without RadioRAG in a zero-shot inference scenario (temperature ≤ 0.1, top-p = 1). RadioRAG retrieved context-specific information from <i>www.radiopaedia.org</i>. Accuracy of LLMs with and without RadioRAG in answering questions from each dataset was assessed. Statistical analyses were performed using bootstrapping while preserving pairing. Additional assessments included comparison of model with human performance and comparison of time required for conventional versus RadioRAG-powered question answering. Results RadioRAG improved accuracy for some LLMs, including GPT-3.5-turbo (74% [59 of 80] vs 66% [53 of 80], false discovery rate [FDR] = 0.03) and Mixtral 8×7B (76% [61 of 80] vs 65% [52 of 80], FDR = 0.02) on the RSNA radiology question answering (RSNA-RadioQA) dataset, with similar trends in the ExtendedQA dataset. Accuracy exceeded that of a human expert (63% [50 of 80], FDR ≤ 0.007) for these LLMs, although not for Mistral 7B-instruct-v0.2, Llama3-8B, and Llama3-70B (FDR ≥ 0.21). RadioRAG reduced hallucinations for all LLMs (rate, 6%-25%). RadioRAG increased estimated response time fourfold. Conclusion RadioRAG shows potential to improve LLM accuracy and factuality in radiology QA by integrating real-time, domain-specific data. <b>Keywords:</b> Retrieval-augmented Generation, Informatics, Computer-aided Diagnosis, Large Language Models <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240476"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12326075/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging. 医学影像中隐私保护的联邦学习和不确定性量化。
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240637
Nikolas Koutsoubis, Asim Waqas, Yasin Yilmaz, Ravi P Ramachandran, Matthew B Schabath, Ghulam Rasool
{"title":"Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging.","authors":"Nikolas Koutsoubis, Asim Waqas, Yasin Yilmaz, Ravi P Ramachandran, Matthew B Schabath, Ghulam Rasool","doi":"10.1148/ryai.240637","DOIUrl":"10.1148/ryai.240637","url":null,"abstract":"<p><p>Artificial intelligence (AI) has demonstrated strong potential in automating medical imaging tasks, with potential applications across disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, because large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data. Instead, model parameters, such as model weights, are exchanged between participating sites. Despite its potential, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment data distribution shifts can degrade model performance, making uncertainty quantification essential. In federated learning, this task is particularly challenging due to data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications. <b>Keywords:</b> Supervised Learning, Perception, Neural Networks, Radiology-Pathology Integration <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240637"},"PeriodicalIF":13.2,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12319697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144053086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
0
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
确定
请完成安全验证×
相关产品
×
本文献相关产品
联系我们:info@booksci.cn Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。 Copyright © 2023 布克学术 All rights reserved.
京ICP备2023020795号-1
ghs 京公网安备 11010802042870号
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术官方微信