Radiology-Artificial Intelligence: Latest Articles

Deep Learning Model for Real-Time Nuchal Translucency Assessment at Prenatal US.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240498
Yuanji Zhang, Xin Yang, Chunya Ji, Xindi Hu, Yan Cao, Chaoyu Chen, He Sui, Binghan Li, Chaojiong Zhen, Weijun Huang, Xuedong Deng, Linliang Yin, Dong Ni
Purpose: To develop and evaluate an artificial intelligence-based model for real-time nuchal translucency (NT) plane identification and measurement in prenatal US assessments.

Materials and Methods: In this retrospective multicenter study conducted from January 2022 to October 2023, the Automated Identification and Measurement of NT (AIM-NT) model was developed and evaluated using internal and external datasets. NT plane assessment, comprising identification of the NT plane and measurement of NT thickness, was conducted independently by AIM-NT and by experienced radiologists; the results were then audited by radiology specialists, and accuracy was compared between groups. To assess alignment with radiologist workflow, discrepancies between AIM-NT and radiologists in NT plane identification time and thickness measurements were evaluated.

Results: The internal dataset included 3959 NT images from 3153 fetuses, and the external dataset included 267 US videos from 267 fetuses. On the internal testing dataset, AIM-NT achieved an area under the receiver operating characteristic curve of 0.92 for NT plane identification. On the external testing dataset, there was no evidence of differences between AIM-NT and radiologists in NT plane identification accuracy (88.8% vs 87.6%, P = .69) or in NT thickness measurements on standard and nonstandard NT planes (P = .29 and P = .59). AIM-NT showed high consistency with radiologists in NT plane identification time, with 1-minute discrepancies observed in 77.9% of cases, and in NT thickness measurements, with a mean difference of 0.03 mm and a mean absolute error of 0.22 mm (95% CI: 0.19, 0.25).

Conclusion: AIM-NT demonstrated high accuracy in identifying the NT plane and measuring NT thickness on prenatal US images, with minimal discrepancies from radiologist workflow.

Keywords: Ultrasound, Fetus, Segmentation, Feature Detection, Diagnosis, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Horii in this issue.
Citations: 0
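The agreement metrics reported above (mean difference, and mean absolute error with a 95% CI) are straightforward to compute for paired AI and reader measurements. A minimal sketch with made-up measurement values and a normal-approximation CI, not the authors' code:

```python
import math

def agreement_stats(ai_mm, rad_mm):
    """Mean difference and mean absolute error (with a normal-approximation
    95% CI) between paired AI and radiologist measurements, as in the
    NT-thickness comparison. Illustrative helper only."""
    diffs = [a - r for a, r in zip(ai_mm, rad_mm)]
    abs_diffs = [abs(d) for d in diffs]
    n = len(diffs)
    mean_diff = sum(diffs) / n            # systematic bias
    mae = sum(abs_diffs) / n              # average magnitude of disagreement
    sd = math.sqrt(sum((d - mae) ** 2 for d in abs_diffs) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)       # normal-approximation half-width
    return mean_diff, mae, (mae - half, mae + half)

# Hypothetical paired NT measurements (mm):
md, mae, ci = agreement_stats([2.0, 2.1, 1.9, 2.2], [2.0, 2.0, 2.0, 2.0])
```

In practice the study reports these on hundreds of cases; the normal approximation is reasonable at that scale, while exact bootstrap CIs would be preferred for small samples.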
RadioRAG: Online Retrieval-Augmented Generation for Radiology Question Answering.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240476
Soroosh Tayebi Arasteh, Mahshad Lotfinia, Keno Bressem, Robert Siepmann, Lisa Adams, Dyke Ferber, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn
Purpose: To evaluate the diagnostic accuracy of various large language models (LLMs) when answering radiology-specific questions with and without access to additional online, up-to-date information via retrieval-augmented generation (RAG).

Materials and Methods: The authors developed radiology RAG (RadioRAG), an end-to-end framework that retrieves data from authoritative radiologic online sources in real time. RAG supplements the initial prompt with information retrieved from external sources, grounding the model's response in relevant information. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions with reference standard answers, LLMs (GPT-3.5-turbo [OpenAI], GPT-4, Mistral 7B, Mixtral 8×7B [Mistral], and Llama3-8B and -70B [Meta]) were prompted with and without RadioRAG in a zero-shot inference scenario (temperature ≤ 0.1, top-p = 1). RadioRAG retrieved context-specific information from www.radiopaedia.org. Accuracy of LLMs with and without RadioRAG in answering questions from each dataset was assessed. Statistical analyses were performed using bootstrapping while preserving pairing. Additional assessments compared model performance with human performance and the time required for conventional versus RadioRAG-powered question answering.

Results: RadioRAG improved accuracy for some LLMs, including GPT-3.5-turbo (74% [59 of 80] vs 66% [53 of 80], false discovery rate [FDR] = 0.03) and Mixtral 8×7B (76% [61 of 80] vs 65% [52 of 80], FDR = 0.02) on the RSNA radiology question answering (RSNA-RadioQA) dataset, with similar trends in the ExtendedQA dataset. Accuracy exceeded that of a human expert (63% [50 of 80], FDR ≤ 0.007) for these LLMs, although not for Mistral 7B-instruct-v0.2, Llama3-8B, or Llama3-70B (FDR ≥ 0.21). RadioRAG reduced hallucinations for all LLMs (rate, 6%-25%). RadioRAG increased estimated response time fourfold.

Conclusion: RadioRAG shows potential to improve LLM accuracy and factuality in radiology question answering by integrating real-time, domain-specific data.

Keywords: Retrieval-augmented Generation, Informatics, Computer-aided Diagnosis, Large Language Models. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
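The core RAG step that RadioRAG implements (retrieve context, then ground the prompt in it) can be illustrated with a toy retriever. The word-overlap scoring and the corpus below are hypothetical stand-ins for the real-time Radiopaedia retrieval described in the abstract:

```python
def retrieve(query, corpus, k=2):
    """Rank documents by simple word overlap with the query and return the
    top k. A hypothetical scorer standing in for RadioRAG's real retrieval."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(question, corpus):
    """Prepend retrieved context to the question -- the grounding step
    that distinguishes RAG from plain zero-shot prompting."""
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical mini-corpus of reference snippets:
corpus = [
    "pneumothorax shows a pleural line on radiograph",
    "glioma enhances on mri",
    "unrelated text",
]
prompt = build_prompt("what sign shows pneumothorax pleural line", corpus)
```

A production system would replace the overlap scorer with dense embeddings or a search API, but the prompt-assembly structure is the same.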
Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240637
Nikolas Koutsoubis, Asim Waqas, Yasin Yilmaz, Ravi P Ramachandran, Matthew B Schabath, Ghulam Rasool
Artificial intelligence (AI) has demonstrated strong potential for automating medical imaging tasks, with applications across disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, because large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data; instead, model parameters, such as model weights, are exchanged between participating sites. Despite its potential, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment data distribution shifts can degrade model performance, making uncertainty quantification essential; in federated learning, this task is particularly challenging because of data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications.

Keywords: Supervised Learning, Perception, Neural Networks, Radiology-Pathology Integration. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
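The parameter-exchange scheme described above is typically realized with federated averaging (FedAvg): each site trains locally, and a central server aggregates only the resulting weights. A minimal sketch of one aggregation round, with models represented as flat parameter lists for illustration:

```python
def fed_avg(site_weights, site_sizes):
    """One FedAvg aggregation round: average each model parameter across
    sites, weighted by local dataset size. Only parameters (never patient
    data) leave each site -- the privacy premise of federated learning.
    Models are flat lists of floats here for simplicity."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * s for w, s in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical sites: site B has 3x the data, so it dominates the average.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Note the review's caveat still applies: even these averaged parameters can leak information, which is why privacy-preserving variants (differential privacy, secure aggregation) are layered on top.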
Estimating Total Lung Volume from Pixel-Level Thickness Maps of Chest Radiographs Using Deep Learning.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240484
Tina Dorosti, Manuel Schultheiß, Philipp Schmette, Jule Heuchert, Johannes Thalhammer, Florian T Gassert, Thorsten Sellerer, Rafael Schick, Kirsten Taphorn, Korbinian Mechlem, Lorenz Birnbacher, Florian Schaff, Franz Pfeiffer, Daniela Pfeiffer
Purpose: To estimate total lung volume (TLV) from real and synthetic frontal chest radiographs at the pixel level using lung thickness maps generated by a U-Net deep learning model.

Materials and Methods: This retrospective study included 5959 chest CT scans from two public datasets, the Lung Nodule Analysis 2016 (Luna16) dataset (n = 656) and the Radiological Society of North America Pulmonary Embolism Detection Challenge 2020 dataset (n = 5303). Additionally, 72 participants were selected from the Klinikum Rechts der Isar (KRI) dataset (October 2018 through December 2019), each with a corresponding chest radiograph obtained within 7 days. Synthetic radiographs and lung thickness maps were generated by forward projection of the CT scans and their lung segmentations. A U-Net model was trained on synthetic radiographs to predict lung thickness maps and estimate TLV. Model performance was assessed using mean squared error (MSE), the Pearson correlation coefficient, and the two-sided Student t distribution.

Results: The study included 72 participants (45 male and 27 female; 33 healthy participants: mean age, 62 years [range, 34-80 years]; 39 with chronic obstructive pulmonary disease: mean age, 69 years [range, 47-91 years]). TLV predictions showed low error rates (MSE Public-Synthetic, 0.16 L²; MSE KRI-Synthetic, 0.20 L²; MSE KRI-Real, 0.35 L²) and strong correlations with the CT-derived reference standard TLV (n Public-Synthetic = 1191, r = 0.99, P < .001; n KRI-Synthetic = 72, r = 0.97, P < .001; n KRI-Real = 72, r = 0.91, P < .001). Across test datasets, the U-Net model performed best on the Luna16 test dataset, with the lowest MSE (0.09 L²) and the strongest correlation with CT-derived TLV (r = 0.99; P < .001).

Conclusion: The U-Net-generated pixel-level lung thickness maps successfully estimated TLV for both synthetic and real radiographs.

Keywords: Frontal Chest Radiographs, Lung Thickness Map, Pixel-Level, Total Lung Volume, U-Net. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
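The volume estimate follows directly from a pixel-level thickness map: integrating per-pixel lung thickness over the detector area yields TLV. A sketch with illustrative units and a tiny hypothetical map, not the authors' pipeline:

```python
def total_lung_volume(thickness_map_cm, pixel_area_cm2):
    """Integrate a pixel-level lung-thickness map into a volume:
    V = sum over pixels of (thickness * pixel area), converted to liters.
    Mirrors the idea of recovering TLV from a predicted thickness map."""
    total_cm3 = sum(t * pixel_area_cm2 for row in thickness_map_cm for t in row)
    return total_cm3 / 1000.0  # 1 L = 1000 cm^3

# Hypothetical 2x2 map: 10 cm of lung behind each pixel, 25 cm^2 pixels.
tlv_liters = total_lung_volume([[10.0, 10.0], [10.0, 10.0]], 25.0)
```

In the study the map comes from a U-Net prediction on the radiograph and the pixel area from detector geometry; the integration step itself is this simple sum.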
The Duke Lung Cancer Screening (DLCS) Dataset: A Reference Dataset of Annotated Low-Dose Screening Thoracic CT.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240248
Avivah J Wang, Fakrul Islam Tushar, Michael R Harowicz, Betty C Tong, Kyle J Lafata, Tina D Tailor, Joseph Y Lo
Citations: 0
Artificial Intelligence in Breast US Diagnosis and Report Generation.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240625
Jian Wang, HongTian Tian, Xin Yang, HuaiYu Wu, XiLiang Zhu, RuSi Chen, Ao Chang, YanLin Chen, HaoRan Dou, RuoBing Huang, Jun Cheng, YongSong Zhou, Rui Gao, KeEn Yang, GuoQiu Li, Jing Chen, Dong Ni, JinFeng Xu, Ning Gu, FaJin Dong
Purpose: To develop and evaluate an artificial intelligence (AI) system for generating breast US reports.

Materials and Methods: This retrospective study included 104 364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82 896 cases, validated on 10 385 cases, and tested on an internal set (10 383 cases) and two external sets (300 and 400 cases). In a blind review, three senior radiologists (each with >10 years of experience) evaluated AI-generated reports and reports written by one midlevel radiologist (7 years of experience), as well as reports from three junior radiologists (each with 2-3 years of experience) with and without AI assistance. The primary outcomes were the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for noninferiority and significance testing.

Results: In external test set 1 (300 cases), the midlevel radiologist and the AI system achieved BI-RADS acceptance rates of 95.00% (285 of 300) versus 92.33% (277 of 300) (P < .001, noninferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), the three junior radiologists had BI-RADS acceptance rates of 87.00% (348 of 400) versus 90.75% (363 of 400) (P = .06), 86.50% (346 of 400) versus 92.00% (368 of 400) (P = .007), and 84.75% (339 of 400) versus 90.25% (361 of 400) (P = .02) without and with AI assistance, respectively.

Conclusion: The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification.

Keywords: Neural Networks, Computer-aided Diagnosis, CAD, Ultrasound. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
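The paired acceptance-rate comparisons above rely on McNemar's test, which uses only the discordant pairs (cases accepted under one condition but not the other). A minimal exact two-sided version; the discordant counts below are hypothetical, since the per-case pairings are not given in the abstract:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test on discordant pairs: b = cases accepted
    only without AI, c = cases accepted only with AI. Under the null, the
    split of the b + c discordant pairs is binomial(n, 0.5). Sketch of the
    paired comparison used for the acceptance rates, not the authors' code."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical: of 10 discordant cases, 9 favored AI assistance.
p = mcnemar_exact_p(1, 9)
```

With the study's sample sizes a chi-square approximation would also be used; the exact binomial form shown here is the safe choice for small discordant counts.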
Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240790
Anna Fink, Alexander Rau, Marco Reisert, Fabian Bamberg, Maximilian F Russe
Large language models (LLMs) hold substantial promise for addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity about the sources of LLM responses. Retrieval-augmented generation (RAG)-based LLMs offer a promising approach to streamlining radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice.

Keywords: Artificial Intelligence, Deep Learning, Natural Language Processing, Tomography, X-Ray. © RSNA, 2025.
Citations: 0
The BraTS-Africa Dataset: Expanding the Brain Tumor Segmentation Data to Capture African Populations.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240528
Maruf Adewole, Jeffrey D Rudie, Anu Gbadamosi, Dong Zhang, Confidence Raymond, James Ajigbotoshso, Oluyemisi Toyobo, Kenneth Aguh, Olubukola Omidiji, Rachel Akinola, Mohammad Abba Suwaid, Adaobi Emegoakor, Nancy Ojo, Chinasa Kalaiwo, Gabriel Babatunde, Afolabi Ogunleye, Yewande Gbadamosi, Kator Iorpagher, Mayomi Onuwaje, Bamidele Betiku, Jasmine Cakmak, Björn Menze, Ujjwal Baid, Spyridon Bakas, Farouk Dako, Abiodun Fatade, Udunna C Anazodo
Citations: 0
Impact of Scanner Manufacturer, Endorectal Coil Use, and Clinical Variables on Deep Learning-assisted Prostate Cancer Classification Using Multiparametric MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.230555
José Guilherme de Almeida, Nuno M Rodrigues, Ana Sofia Castro Verde, Ana Mascarenhas Gaivão, Carlos Bilreiro, Inês Santiago, Joana Ip, Sara Belião, Celso Matos, Sara Silva, Manolis Tsiknakis, Kostantinos Marias, Daniele Regge, Nikolaos Papanikolaou
Purpose: To assess the effect of scanner manufacturer and scanning protocol on the performance of deep learning models that classify the aggressiveness of prostate cancer (PCa) at biparametric MRI (bpMRI).

Materials and Methods: In this retrospective study, 5478 cases from ProstateNet, a PCa bpMRI dataset with examinations from 13 centers, were used to develop five deep learning (DL) models to predict PCa aggressiveness with minimal lesion information and to test how training on data from different subgroups (scanner manufacturers and endorectal coil [ERC] use: Siemens, Philips, GE with and without ERC, and the full dataset) affects model performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC). The effect of clinical features (age, prostate-specific antigen level, Prostate Imaging Reporting and Data System score) on model performance was also evaluated.

Results: DL models were trained on 4328 bpMRI cases, and the best model achieved an AUC of 0.73 when trained and tested using data from all manufacturers. Held-out test set performance was higher when models trained on data from one manufacturer were tested on the same manufacturer (within- vs between-manufacturer AUC difference of 0.05 on average, P < .001). The addition of clinical features did not improve performance (P = .24). Learning curve analyses showed that performance remained stable as training data increased. Analysis of DL features showed that scanner manufacturer and scanning protocol heavily influenced feature distributions.

Conclusion: In automated classification of PCa aggressiveness using bpMRI data, scanner manufacturer and ERC use had a major effect on DL model performance and features.

Keywords: Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD), Computer Applications-General (Informatics), Oncology. Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Suri and Hsu in this issue.
Citations: 0
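The AUC values compared across manufacturer subgroups can be computed directly from model scores via the Mann-Whitney interpretation: the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A toy implementation with illustrative scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U relation: the
    fraction of (positive, negative) pairs ranked correctly, with ties
    counted as half. Equivalent to integrating the ROC curve."""
    pairs = [
        (p > n) + 0.5 * (p == n)
        for p in scores_pos
        for n in scores_neg
    ]
    return sum(pairs) / len(pairs)

# Hypothetical model scores for aggressive (positive) vs indolent cases:
subgroup_auc = auc([0.9, 0.8], [0.1, 0.2])
```

Computing this per manufacturer subgroup, as the study does, makes within- versus between-manufacturer performance gaps directly comparable on the same scale.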
Natural Language Processing for Everyone.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-05-01 DOI: 10.1148/ryai.250218
Quirin D Strotzer
Citations: 0