Latest Articles in Radiology: Artificial Intelligence

High-Performance Open-Source AI for Breast Cancer Detection and Localization in MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240550
Lukas Hirsch, Elizabeth J Sutton, Yu Huang, Beliz Kayis, Mary Hughes, Danny Martinez, Hernan A Makse, Lucas C Parra
Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI scans.
Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRI examinations conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRI scans from the primary site (n = 6615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (n = 7058 breasts) and a second clinical site (n = 1840 breasts).
Results The primary site dataset included 30 672 sagittal MRI examinations (52 598 breasts) from 9986 female patients (mean age, 52.1 years ± 11.2 [SD]). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer in the primary site. At 90% specificity (5717 of 6353), model sensitivity was 83% (217 of 262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232 of 262) of sagittal images, 92.8% (272 of 293) of axial images from the primary site, and 87.7% (807 of 920) of secondary site axial images.
Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation.
Keywords: Computer-aided Diagnosis (CAD), MRI, Neural Networks, Breast. Supplemental material is available for this article. See also the commentary by Moassefi and Xiao in this issue. © RSNA, 2025.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464713/pdf/
Citations: 0
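The abstract above reports sensitivity at a fixed 90%-specificity operating point. A minimal sketch of how such an operating point can be derived from model scores; the thresholding procedure and names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sensitivity_at_specificity(y_true, scores, target_spec=0.90):
    """Pick the score threshold that yields roughly target_spec specificity
    on the negatives, then report the resulting sensitivity (illustrative sketch)."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    neg = scores[~y_true]
    # Threshold at the target quantile of the negative-class scores:
    # about target_spec of negatives fall below it.
    thr = np.quantile(neg, target_spec)
    specificity = np.mean(neg < thr)
    sensitivity = np.mean(scores[y_true] >= thr)
    return thr, specificity, sensitivity
```

With well-separated score distributions this recovers the familiar trade-off: pushing specificity up costs sensitivity.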
Sections Don't Lie: AI-driven Breast Cancer Detection Using MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250520
Mana Moassefi, Lekui Xiao
Commentary; no abstract available. (Radiology: Artificial Intelligence, vol 7, no 5, e250520.)
Citations: 0
MR-Transformer: A Vision Transformer-based Deep Learning Model for Total Knee Replacement Prediction Using MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240373
Chaojie Zhang, Shengjia Chen, Ozkan Cigdem, Haresh Rengaraj Rajamohan, Kyunghyun Cho, Richard Kijowski, Cem M Deniz
Purpose To develop a transformer-based deep learning model, MR-Transformer, that leverages ImageNet pretraining and three-dimensional spatial correlations to predict the progression of knee osteoarthritis to total knee replacement using MRI.
Materials and Methods This retrospective study included 353 case-control matched pairs of coronal intermediate-weighted turbo spin-echo (COR-IW-TSE) and sagittal intermediate-weighted turbo spin-echo with fat suppression (SAG-IW-TSE-FS) knee MRI scans from the Osteoarthritis Initiative database, with a follow-up period of up to 9 years, and 270 case-control matched pairs of coronal short-tau inversion recovery (COR-STIR) and sagittal proton density fat-saturated (SAG-PD-FAT-SAT) knee MRI scans from the Multicenter Osteoarthritis Study database, with a follow-up period of up to 7 years. Performance of MR-Transformer in predicting the progression of knee osteoarthritis was compared with that of existing state-of-the-art deep learning models (TSE-Net, 3DMeT, and MRNet) using sevenfold nested cross-validation across the four MRI sequences.
Results Among the 353 Osteoarthritis Initiative case-control pairs, 215 were women (mean age, 63 years ± 8 [SD]); among the 270 Multicenter Osteoarthritis Study case-control pairs, 203 were women (mean age, 65 years ± 7). MR-Transformer achieved areas under the receiver operating characteristic curve (AUCs) of 0.88 (95% CI: 0.85, 0.91), 0.88 (95% CI: 0.85, 0.90), 0.86 (95% CI: 0.82, 0.89), and 0.84 (95% CI: 0.81, 0.87) for COR-IW-TSE, SAG-IW-TSE-FS, COR-STIR, and SAG-PD-FAT-SAT, respectively. The model achieved a higher AUC than 3DMeT for all MRI sequences (P < .001) and showed its highest sensitivity, 83% (95% CI: 78, 87), and specificity, 83% (95% CI: 76, 88), on the COR-IW-TSE sequence.
Conclusion Compared with existing deep learning models, MR-Transformer exhibited state-of-the-art performance in predicting the progression of knee osteoarthritis to total knee replacement using MRI scans.
Keywords: MRI, Knee, Prognosis, Supervised Learning. Supplemental material is available for this article. © RSNA, 2025.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464714/pdf/
Citations: 0
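The core operation a vision transformer adds over a CNN is self-attention across token features (here one could imagine one token per slice or patch). A minimal single-head scaled dot-product attention in NumPy; shapes and names are illustrative assumptions, not the MR-Transformer architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a sequence of feature vectors.

    X: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_head).
    Returns attended features (n_tokens, d_head) and the attention matrix."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # each row is a distribution over tokens
    return A @ V, A
```

Each output token is a weighted mixture of all tokens, which is how a transformer can correlate spatially distant slices in one step.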
Single Inspiratory Chest CT-based Generative Deep Learning Models to Evaluate Functional Small Airways Disease.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240680
Di Zhang, Mingyue Zhao, Xiuxiu Zhou, Yiwei Li, Yu Guan, Yi Xia, Jin Zhang, Qi Dai, Jingfeng Zhang, Li Fan, S Kevin Zhou, Shiyuan Liu
Purpose To develop a deep learning model that uses a single inspiratory chest CT scan to perform parametric response mapping (PRM) and predict functional small airways disease (fSAD).
Materials and Methods In this retrospective study, predictive and generative deep learning models for PRM using inspiratory chest CT were developed on a model development dataset with fivefold cross-validation, with PRM derived from paired respiratory CT as the reference standard. Voxelwise metrics, including sensitivity, area under the receiver operating characteristic curve (AUC), and structural similarity index measure, were used to evaluate model performance in predicting PRM and generating expiratory CT images. The best-performing model was tested on three internal test sets and an external test set.
Results The model development dataset of 308 individuals (median age, 67 years [IQR: 62-70 years]; 113 female) was divided into a training set (n = 216), an internal validation set (n = 31), and a first internal test set (n = 61). The generative model outperformed the predictive model in detecting fSAD (sensitivity, 86.3% vs 38.9%; AUC, 0.86 vs 0.70). The generative model performed well in the second internal (AUCs of 0.64, 0.84, and 0.97 for emphysema, fSAD, and normal lung tissue, respectively), third internal (AUCs of 0.63, 0.83, and 0.97), and external (AUCs of 0.58, 0.85, and 0.94) test sets. Notably, the model exhibited exceptional performance in the preserved ratio impaired spirometry group of the fourth internal test set (AUCs of 0.62, 0.88, and 0.96).
Conclusion The proposed generative model, using a single inspiratory CT scan, outperformed existing algorithms in PRM evaluation and achieved results comparable to paired respiratory CT.
Keywords: CT, Lung, Chronic Obstructive Pulmonary Disease, Diagnosis, Reconstruction Algorithms, Deep Learning, Parametric Response Mapping, X-ray Computed Tomography, Small Airways. Supplemental material is available for this article. © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also the commentary by Hathaway and Singh in this issue.
Citations: 0
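PRM itself is a voxelwise classification on registered inspiratory/expiratory CT. A sketch using the commonly cited attenuation cutoffs of -950 HU (inspiratory) and -856 HU (expiratory); these thresholds come from the general PRM literature and are an assumption here, not values stated in the abstract:

```python
import numpy as np

def prm_classify(insp_hu, exp_hu):
    """Voxelwise PRM labels: 0 = normal, 1 = fSAD, 2 = emphysema.

    Cutoffs (-950 HU inspiratory, -856 HU expiratory) are the commonly
    cited literature values, assumed for illustration."""
    insp_hu = np.asarray(insp_hu, dtype=float)
    exp_hu = np.asarray(exp_hu, dtype=float)
    emphysema = (insp_hu < -950) & (exp_hu < -856)   # low attenuation on both scans
    fsad = (insp_hu >= -950) & (exp_hu < -856)       # gas trapping only on expiration
    labels = np.zeros(insp_hu.shape, dtype=np.int8)  # default: normal
    labels[fsad] = 1
    labels[emphysema] = 2
    return labels
```

The paper's generative model effectively synthesizes the expiratory input to this map from the inspiratory scan alone.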
Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240485
Ashkan Moradi, Fadila Zerka, Joeran Sander Bosma, Mohammed R S Sunoqrot, Bendik S Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot
Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection.
Materials and Methods A retrospective study was conducted using Flower FL (Flower.ai) to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MR images (four clients, 1294 patients) and csPCa detection using biparametric MR images (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P values for performance differences were calculated using permutation testing.
Results The FL configurations were optimized independently for each task, with the best performance at 1 local epoch (300 rounds) using FedMedian for prostate segmentation and 5 local epochs (200 rounds) using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score, increase from 0.73 ± 0.06 [SD] to 0.88 ± 0.03; P ≤ .01) and csPCa detection (PI-CAI score, increase from 0.63 ± 0.07 to 0.74 ± 0.06; P ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance than the FL-baseline model (PI-CAI score, increase from 0.72 ± 0.06 to 0.74 ± 0.06; P ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice score, 0.87 ± 0.03 vs 0.88 ± 0.03; P > .05).
Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
Keywords: Federated Learning, Prostate Cancer, MRI, Cancer Detection, Deep Learning. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
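The PI-CAI score used above is defined in the abstract as the average of the area under the receiver operating characteristic curve and the average precision, which can be computed directly with scikit-learn's standard metric functions:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def pi_cai_score(y_true, y_score):
    """PI-CAI score as defined in the abstract:
    mean of AUROC and average precision."""
    return 0.5 * (roc_auc_score(y_true, y_score)
                  + average_precision_score(y_true, y_score))
```

Averaging the two metrics balances ranking quality over all cases (AUROC) against precision on the positive class (AP), which matters when lesions are rare.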
Advancing Early Detection of Chronic Obstructive Pulmonary Disease Using Generative AI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250555
Quincy A Hathaway, Yashbir Singh
Commentary; no abstract available. (Radiology: Artificial Intelligence, vol 7, no 5, e250555.)
Citations: 0
Prediction of Early Neoadjuvant Chemotherapy Response of Breast Cancer through Deep Learning-based Pharmacokinetic Quantification of DCE MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240769
Chaowei Wu, Lixia Wang, Nan Wang, Stephen Shiao, Tai Dou, Yin-Chen Hsu, Anthony G Christodoulou, Yibin Xie, Debiao Li
Purpose To improve the generalizability of pathologic complete response prediction following neoadjuvant chemotherapy using deep learning-based retrospective pharmacokinetic quantification of early-treatment dynamic contrast-enhanced (DCE) MRI.
Materials and Methods This multicenter retrospective study included breast MRI data from four publicly available datasets of patients with breast cancer acquired from May 2002 to November 2016. Pharmacokinetic quantification was performed using a previously developed deep learning model for clinical multiphasic DCE MRI datasets. Radiomic analysis was performed on pharmacokinetic quantification maps and conventional enhancement maps. These data, together with clinicopathologic variables and shape-based radiomic analysis, were then used to predict pathologic complete response with logistic regression. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC).
Results A total of 1073 female patients with breast cancer were included. The proposed method showed improved consistency and generalizability compared with the reference method, achieving higher AUC values across external datasets (0.82 [95% CI: 0.72, 0.91], 0.75 [95% CI: 0.71, 0.79], and 0.77 [95% CI: 0.66, 0.86] for datasets A2, B, and C, respectively). For dataset A2 (from the same study as the training dataset), there was no significant difference in performance between the proposed and reference methods (P = .80). Notably, on the combined external datasets, the proposed method significantly outperformed the reference method (AUC, 0.75 [95% CI: 0.72, 0.79] vs 0.71 [95% CI: 0.68, 0.76]; P = .003).
Conclusion This work offers an approach to improve the generalizability and predictive accuracy of pathologic complete response prediction for breast cancer across diverse datasets, achieving higher and more consistent AUC scores than existing methods.
Keywords: Tumor Response, Breast, Prognosis, Dynamic Contrast-enhanced MRI. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Schnitzler in this issue.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464716/pdf/
Citations: 0
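Pharmacokinetic quantification of DCE MRI conventionally means fitting a compartment model such as the standard Tofts model, Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep * (t - tau)) dtau; the deep learning model in the paper replaces that fitting step. A discrete-convolution sketch of the forward model (the specific model and parameter values are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, kep):
    """Standard Tofts model via discrete convolution (illustrative sketch).

    t: uniformly spaced time points (min); cp: arterial input function;
    ktrans (1/min) and kep (1/min) are the pharmacokinetic parameters."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    # Truncate the full convolution to the causal part of length len(t).
    ct = ktrans * np.convolve(cp, kernel)[: len(t)] * dt
    return ct
```

Classical quantification inverts this relationship per voxel (fit ktrans, kep from measured ct), which requires densely sampled dynamic data; the paper's approach recovers the maps from clinical multiphasic acquisitions instead.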
Development of Privacy-preserving Deep Learning Model with Homomorphic Encryption: A Technical Feasibility Study in Kidney CT Imaging.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-08-27 DOI: 10.1148/ryai.240798
Sang-Wook Lee, Jongmin Choi, Min-Je Park, Hajin Kim, Soo-Heang Eo, Garam Lee, Sulgi Kim, Jungyo Suh
Purpose To evaluate the technical feasibility of implementing homomorphic encryption in deep learning models for privacy-preserving CT image analysis of renal masses.
Materials and Methods A privacy-preserving deep learning system was developed through three sequential technical phases: a reference convolutional neural network (Ref-CNN) based on the ResNet architecture; modification for encryption compatibility (Approx-CNN), replacing ReLU with a polynomial approximation and max pooling with average pooling; and implementation of fully homomorphic encryption (HE-CNN). The CKKS encryption scheme was used for its ability to perform arithmetic operations on encrypted real numbers. Using 12,446 CT images from a public dataset (3,709 renal cysts, 5,077 normal kidneys, and 2,283 kidney tumors), model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC).
Results All models demonstrated high diagnostic accuracy, with AUCs ranging from 0.89 to 0.99 and AUPRCs from 0.67 to 0.99. The diagnostic performance trade-off from Ref-CNN to Approx-CNN was minimal (AUC, 0.99 to 0.97 for the normal category), with no evidence of differences between models. However, encryption substantially increased storage and computational demands: a 256 × 256-pixel image expanded from 65 KB to 32 MB, requiring 50 minutes for CPU inference but only 90 seconds with GPU acceleration.
Conclusion This technical development demonstrates that privacy-preserving deep learning inference using homomorphic encryption is feasible for renal mass classification on CT images, achieving comparable diagnostic performance while maintaining data privacy through end-to-end encryption. © RSNA, 2025.
Citations: 0
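The ReLU-to-polynomial swap is what makes a CNN evaluable under CKKS, which supports only additions and multiplications on ciphertexts. A least-squares degree-2 fit on [-1, 1] illustrates the idea; the interval, degree, and fitting method are assumptions, since the paper's exact approximation is not given in the abstract:

```python
import numpy as np

# ReLU cannot be evaluated homomorphically, so approximate it with a
# low-degree polynomial on a bounded input range (assumed [-1, 1] here).
x = np.linspace(-1.0, 1.0, 2001)
relu = np.maximum(x, 0.0)
coeffs = np.polyfit(x, relu, deg=2)      # fit a*x^2 + b*x + c by least squares
approx = np.polyval(coeffs, x)
max_err = np.abs(approx - relu).max()    # worst-case deviation on the interval
```

Keeping the degree low matters because each ciphertext multiplication consumes CKKS "depth"; the price is the small activation error that shows up as the AUC drop from Ref-CNN to Approx-CNN.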
Characterizing the Impact of Training Data on Generalizability: Application in Deep Learning to Estimate Lung Nodule Malignancy Risk.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-08-20 DOI: 10.1148/ryai.240636
Bogdan Obreja, Joeran Bosma, Kiran Vaidhya Venkadesh, Zaigham Saghir, Mathias Prokop, Colin Jacobs
Purpose To investigate the relationship between training data volume and the performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening.
Materials and Methods This retrospective study used a dataset of 16,077 annotated nodules (1,249 malignant, 14,828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction on stratified subsets ranging from 1.25% to 100% of the dataset. External testing was conducted on data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which AI performance was statistically noninferior to that of the AI trained on the full NLST cohort. A size-matched, cancer-enriched subset of DLCST, in which each malignant nodule was paired by diameter with the two closest benign nodules, was used to determine the amount of training data at which AI performance was statistically noninferior to the average performance of 11 clinicians.
Results The external testing set included 599 participants (mean age, 57.65 years ± 4.84 [SD] for female and 59.03 years ± 4.94 for male participants) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 (95% CI: 0.88, 0.96) on the DLCST cohort when trained on the full NLST dataset. Training with 80% of the NLST data resulted in noninferior performance (mean AUC, 0.92 [95% CI: 0.89, 0.96]; P = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached noninferior clinician-level performance (mean AUC, 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (P = .02).
Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinician-level performance with a fraction of the training data and reaching peak performance before the full dataset was used. © RSNA, 2025.
Citations: 0
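Learning-curve experiments like the one above depend on each training subset preserving the malignant/benign ratio. A sketch of stratified fractional sampling; the function name and details are illustrative assumptions, not the authors' code:

```python
import numpy as np

def stratified_fraction(labels, fraction, seed=0):
    """Return sorted indices of a subset that samples each class
    at the same fraction, preserving the class ratio (sketch)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        n = max(1, int(round(fraction * idx.size)))  # at least one per class
        keep.extend(idx[:n])
    return np.sort(np.array(keep))
```

Repeating this at 1.25%, 2.5%, ..., 100% and retraining at each fraction yields the performance-vs-data curve the noninferiority tests are run against.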
MammosighTR: Nationwide Breast Cancer Screening Mammogram Dataset with BI-RADS Annotations for Artificial Intelligence Applications.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-08-13 DOI: 10.1148/ryai.240841
Ural Koç, Emrah Karakaş, Ebru Akçapınar Sezer, Muhammed Said Beşler, Yaşar Alper Özkaya, Şehnaz Evrimler, Ahmet Yalçın, Hüseyin Alper Kızıloğlu, Uğur Kesimal, Meltem Oruç, İmran Çankaya, Duygu Koç Keleş, Neslihan Merd, Erdem Özkan, Numan İlteriş Çevik, Muhammet Batuhan Gökhan, Büşra Hayat, Mustafa Özer, Oğuzhan Tokur, Fatih Işık, Mehmet Alperen Tezcan, Muhammet Furkan Battal, Mecit Yüzkat, Nihat Barış Sebik, Fatih Karademir, Yasemin Topuz, Özgür Sezer, Songül Varlı, Erhan Akdoğan, Mustafa Mahir Ülgü, Şuayip Birinci
The MammosighTR dataset, derived from Türkiye's national breast cancer screening mammography program, provides BI-RADS-labeled mammograms with detailed annotations of breast composition and lesion quadrant location, which may be useful for developing and testing AI models for breast cancer detection. © RSNA, 2025.
Citations: 0