Radiology-Artificial Intelligence: Latest Articles

Structural MRI-based Computer-aided Diagnosis Models for Alzheimer Disease: Insights into Misclassifications and Diagnostic Limitations.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-11-01 DOI: 10.1148/ryai.240508
Xiaopeng Kang, Jiaji Lin, Kun Zhao, Shaozhen Yan, Pindong Chen, Dawei Wang, Hongxiang Yao, Bo Zhou, Chunshui Yu, Pan Wang, Zhengluan Liao, Yan Chen, Xi Zhang, Ying Han, Jie Lu, Yong Liu
Abstract: Purpose To examine common patterns among different computer-aided diagnosis (CAD) models for Alzheimer disease (AD) using structural MRI data and to characterize the clinical and imaging features associated with their misclassifications. Materials and Methods This retrospective study used 3258 baseline structural MRI scans from five multisite datasets and two multidisease datasets collected between September 2005 and December 2019. The 3D Nested Hierarchical Transformer (3DNesT) model and other CAD techniques were used for AD classification with 10-fold cross-validation and cross-dataset validation. Subgroup analysis of CAD-misclassified individuals compared clinical and neuroimaging biomarkers using independent t tests with Bonferroni correction. Results The study included 1391 patients with AD (mean age, 72.1 years ± 9.2 [SD]; 757 female), 205 patients with other neurodegenerative diseases (mean age, 64.9 years ± 9.9; 117 male), and 1662 healthy controls (mean age, 70.6 years ± 7.6; 935 female). The 3DNesT model achieved 90.0% ± 2.3 cross-validation accuracy and 82.2%, 90.1%, and 91.6% accuracy in three external datasets. Further analysis suggested that the false-negative subgroup (n = 223) exhibited minimal atrophy and better cognitive performance on the Mini-Mental State Examination (MMSE) than the true-positive subgroup (MMSE score, 21.4 ± 4.4 vs 19.7 ± 5.7; family-wise error-corrected P [P_FWE] < .001), despite similar burdens of amyloid β (705.9 vs 665.7 pg/mL; P_FWE = .99) and tau (352.4 vs 371.0 pg/mL; P_FWE = .99). Conclusion A subgroup of patients with false-negative classifications exhibited atypical structural MRI patterns and clinical measures, fundamentally limiting the diagnostic performance of CAD models based solely on structural MRI.
Keywords: MR Imaging, Dementia, Computer Applications-3D, Alzheimer's Disease, Computer-aided Diagnosis, Misclassification, Atypical AD. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Nasrallah in this issue.
Citations: 0
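The subgroup analysis above compares biomarkers with independent t tests under a Bonferroni correction. As a minimal sketch of how such a family-wise correction works (the function name and toy p-values below are illustrative, not values from the study):

```python
# Sketch of a Bonferroni correction over a family of m comparisons.
# Each test is held to the stricter per-test threshold alpha / m.

def bonferroni(p_values, alpha=0.05):
    """Return (adjusted per-test threshold, list of reject decisions)."""
    m = len(p_values)
    threshold = alpha / m
    return threshold, [p < threshold for p in p_values]

# Three hypothetical comparisons in one family (e.g. MMSE, amyloid, tau):
threshold, decisions = bonferroni([0.0004, 0.02, 0.04])
print(threshold)   # 0.05 / 3 ≈ 0.0167
print(decisions)   # only the first comparison survives correction
```

A p value of .02 would pass an uncorrected .05 threshold but fails here, which is exactly the protection against inflated family-wise error the study relies on.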
A Deep Learning Framework for Synthesizing Longitudinal Infant Brain MRI during Early Development.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-11-01 DOI: 10.1148/ryai.240708
Yu Fang, Honglin Xiong, Jiawei Huang, Feihong Liu, Zhenrong Shen, Xinyi Cai, Han Zhang, Qian Wang
Abstract: Purpose To develop a three-stage, age- and modality-conditioned framework to synthesize longitudinal infant brain MRI scans and account for rapid structural and contrast changes during early brain development. Materials and Methods This retrospective study used 848 T1- and T2-weighted MRI scans from 139 infants in the Baby Connectome Project, collected between September 2016 and May 2020. The framework models three critical image cues: volumetric expansion, cortical folding, and myelination, predicting missing time points with age and modality as conditioning factors. The method was compared with LGAN, CounterSyn, and a diffusion-based approach using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Dice similarity coefficient (DSC). Results The framework was trained on 119 participants (mean age ± SD, 11.25 months ± 6.16; 60 female, 59 male) and tested on 20 participants (mean age, 12.98 months ± 6.59; 11 female, 9 male). For T1-weighted images, PSNRs were 25.44 ± 1.95 and 26.93 ± 2.50 for forward and backward synthesis, respectively, with SSIMs of 0.87 ± 0.03 and 0.90 ± 0.02. For T2-weighted images, PSNRs were 26.35 ± 2.30 and 26.40 ± 2.56, with SSIMs of 0.87 ± 0.03 and 0.89 ± 0.02, significantly outperforming the competing methods (P < .001). The framework also excelled in tissue segmentation (P < .001) and cortical reconstruction, achieving a DSC of 0.85 for gray matter and 0.86 for white matter, with intraclass correlation coefficients exceeding 0.8 in most cortical regions. Conclusion The proposed three-stage framework effectively synthesized age-specific infant brain MRI scans, outperforming competing methods in image quality and tissue segmentation, with strong performance in cortical reconstruction, demonstrating potential for developmental modeling and longitudinal analyses.
Keywords: Pediatrics, Brain, Brain Stem, MRI, Infant Brain MRI. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Chaudhari and Rauschecker in this issue.
Citations: 0
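The synthesis results above are reported in PSNR (in decibels). A minimal NumPy sketch of PSNR for images with a known intensity range (the toy arrays below are illustrative; this is not the study's exact implementation):

```python
import numpy as np

def psnr(reference, synthesized, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the intensity span."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(synthesized, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a uniform 0.1 error over a unit-range image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) ≈ 20 dB.
a = np.zeros((64, 64))
b = np.full((64, 64), 0.1)
print(psnr(a, b))  # ≈ 20.0
```

Higher PSNR means smaller pixelwise error, so the reported gains of 1-2 dB over competing methods correspond to a measurable reduction in mean squared synthesis error.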
Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-11-01 DOI: 10.1148/ryai.240861
William Lotter, Daniel S Hippe, Thomas Oshiro, Kathryn P Lowry, Hannah S Milch, Diana L Miglioretti, Joann G Elmore, Christoph I Lee, William Hsu
Abstract: Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of artificial intelligence (AI) and radiologists. Materials and Methods The associations between seven acquisition parameters (mammography machine version, kilovoltage peak, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness) and the performance of AI and radiologists in interpreting two-dimensional screening mammograms, acquired by a diverse health system between December 2010 and 2019, were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography Dialogue on Reverse Engineering Assessment and Methods (DREAM) Challenge were assessed. The association between each acquisition parameter and the sensitivity and specificity of the AI models and of the radiologists' interpretations was evaluated separately using generalized estimating equation-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28 278 screening two-dimensional mammograms from 22 626 women (mean age ± SD, 58.5 years ± 11.5; 4913 women had multiple mammograms); 324 examinations resulted in a breast cancer diagnosis within 1 year. Acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced specificity for the ensemble AI (-4.5% per 1-SD increase; P < .001) but not for radiologists (P = .44), whereas increased compression force reduced specificity for radiologists (-1.3% per 1-SD increase; P < .001) but not for AI (P = .60). Conclusion Screening mammography acquisition parameters affected the performance of both AI and radiologists, and some parameters affected the two differently.
Keywords: AI Robustness, Mammography, Medical Physics. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Lee and Bae in this issue.
Citations: 0
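The study above models sensitivity and specificity as functions of acquisition parameters using covariate-adjusted generalized estimating equations. As a much simpler illustrative sketch, the two metrics can be computed within strata of a binary acquisition flag (all names and values below are hypothetical, and no covariate adjustment or clustering correction is performed):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels and binary calls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical examinations stratified by a binary acquisition flag
# (e.g. high vs low delivered exposure); values are illustrative only.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])        # cancer within 1 year
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 1])        # recall decision
high_exposure = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

for name, mask in [("high exposure", high_exposure), ("low exposure", ~high_exposure)]:
    sens, spec = sens_spec(y_true[mask], y_pred[mask])
    print(name, sens, spec)
```

Comparing the per-stratum metrics hints at a parameter effect; the paper's GEE approach additionally adjusts for clinical factors and for women contributing multiple examinations, which this sketch deliberately omits.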
Quantitative Pharmacokinetic Mapping with AI: Toward More Generalizable Response Prediction in Breast Cancer MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250550
Tician Schnitzler
Radiology-Artificial Intelligence 7(5): e250550. No abstract available.
Citations: 0
Collaborative Integration of AI and Human Expertise to Improve Detection of Chest Radiograph Abnormalities.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240277
Akash Awasthi, Ngan Le, Zhigang Deng, Carol C Wu, Hien Van Nguyen
Abstract: Purpose To develop a collaborative artificial intelligence (AI) system that integrates eye gaze data and radiology reports to improve diagnostic accuracy in chest radiograph interpretation by identifying and correcting perceptual errors. Materials and Methods This retrospective study used the public REFLACX (Reports and Eye-Tracking Data for Localization of Abnormalities in Chest X-rays) and EGD-CXR (Eye Gaze Data for Chest X-rays) datasets to develop a collaborative AI solution named Collaborative Radiology Expert (CoRaX). It uses a large multimodal model to analyze image embeddings, eye gaze data, and radiology reports, aiming to rectify perceptual errors in chest radiograph interpretation. The system was evaluated on two simulated error datasets featuring random and uncertainty-based alterations of five abnormalities, with evaluation focused on the system's referral-making process, the quality of referrals, and its performance within collaborative diagnostic settings. Results In the random masking-based error dataset, 28.0% (93 of 332) of abnormalities were altered. The system corrected 21.3% (71 of 332) of these errors, with 6.6% (22 of 332) remaining unresolved. Its accuracy in identifying the correct regions of interest for missed abnormalities was 63.0% (95% CI: 59.0, 68.0), and 85.7% (240 of 280) of interactions with radiologists were deemed satisfactory, meaning that the system provided diagnostic aid. In the uncertainty masking-based error dataset, 43.9% (146 of 332) of abnormalities were altered; the system corrected 34.6% (115 of 332), with 9.3% (31 of 332) unresolved. The accuracy of predicted regions for missed abnormalities was 58.0% (95% CI: 55.0, 62.0), and 78.4% (233 of 297) of interactions were satisfactory. Conclusion The CoRaX system can collaborate efficiently with radiologists and address perceptual errors across various abnormalities in chest radiographs.
Keywords: Perception, Convolutional Neural Network (CNN), Deep Learning Algorithms, Radiology-Pathology Integration, Unsupervised Learning, CoRaX, Perceptual Error, Referral, Deferral. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Levi and Laghi in this issue.
Citations: 0
"You'll Never Look Alone": Embedding Second-Look AI into the Radiologist's Workflow.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250575
Riccardo Levi, Andrea Laghi
Radiology-Artificial Intelligence 7(5): e250575. No abstract available.
Citations: 0
Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240833
Zhongyi Zhang, Julie A Hides, Enrico De Martino, Janet R Millner, Gervase Tuxworth
Abstract: Chronic low back pain is a global health issue with considerable socioeconomic burden and is associated with changes in the lumbar paraspinal muscles (LPMs). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MR images. A total of 1302 MR images from 641 participants across five centers were included; data from two centers were used for model training and tuning, and data from the remaining three centers were used for external testing. Segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and intraclass correlation coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93-0.97 on the external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (P < .05), and agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented the LPMs and was statistically equivalent to manual measurement of muscle volume and fatty infiltration ratio across multisequence, multicenter MR images.
Keywords: MR Imaging, Muscular, Volume Analysis, Segmentation, Vision, Application Domain, Quantification, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
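Segmentation accuracy above is summarized with the Dice similarity coefficient: twice the overlap of two masks divided by their total size. A minimal sketch for binary masks (the toy masks are illustrative, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy masks with two foreground voxels each, overlapping on one:
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 2*1 / (2+2) = 0.5
```

A DSC of 0.98, as reported internally, means the automated and manual masks agree on nearly every voxel; the 0.93-0.97 external values indicate only modest degradation at unseen centers.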
High-Performance Open-Source AI for Breast Cancer Detection and Localization in MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240550
Lukas Hirsch, Elizabeth J Sutton, Yu Huang, Beliz Kayis, Mary Hughes, Danny Martinez, Hernan A Makse, Lucas C Parra
Abstract: Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI scans. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date, comprising all breast MRI examinations conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRI scans from the primary site (n = 6615 breasts), and generalizability was assessed on axial data from the primary site (n = 7058 breasts) and from a second clinical site (n = 1840 breasts). Results The primary site dataset included 30 672 sagittal MRI examinations (52 598 breasts) from 9986 female patients (mean age, 52.1 years ± 11.2 [SD]). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer at the primary site. At 90% specificity (5717 of 6353), model sensitivity was 83% (217 of 262), comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from the secondary site. It correctly located the tumor in 88.5% (232 of 262) of sagittal images, 92.8% (272 of 293) of axial images from the primary site, and 87.7% (807 of 920) of axial images from the secondary site. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection, and its code and weights are openly available to stimulate further development and validation.
Keywords: Computer-aided Diagnosis (CAD), MRI, Neural Networks, Breast. Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Moassefi and Xiao in this issue.
Citations: 0
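The "sensitivity at 90% specificity" operating point reported above can be illustrated by thresholding model scores at the 90th percentile of the negative class (the score distributions below are synthetic, not from the study):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, specificity=0.90):
    """Pick the threshold that passes `specificity` of negatives,
    then report the fraction of positives above that threshold."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    threshold = np.quantile(scores[labels == 0], specificity)
    return float(np.mean(scores[labels == 1] > threshold))

# Synthetic scores: negatives uniform on [0, 1), positives shifted upward.
rng = np.random.default_rng(0)
neg = rng.uniform(0.0, 1.0, 1000)
pos = rng.uniform(0.7, 1.3, 1000)
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])
print(sensitivity_at_specificity(scores, labels))
```

Fixing specificity first mirrors screening practice, where the recall (false-positive) rate is constrained and sensitivity is the figure of merit that remains.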
MR-Transformer: A Vision Transformer-based Deep Learning Model for Total Knee Replacement Prediction Using MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240373
Chaojie Zhang, Shengjia Chen, Ozkan Cigdem, Haresh Rengaraj Rajamohan, Kyunghyun Cho, Richard Kijowski, Cem M Deniz
Abstract: Purpose To develop a transformer-based deep learning model, MR-Transformer, that leverages ImageNet pretraining and three-dimensional spatial correlations to predict progression of knee osteoarthritis to total knee replacement using MRI. Materials and Methods This retrospective study included 353 case-control matched pairs of coronal intermediate-weighted turbo spin-echo (COR-IW-TSE) and sagittal intermediate-weighted turbo spin-echo with fat suppression (SAG-IW-TSE-FS) knee MRI scans from the Osteoarthritis Initiative database (follow-up of up to 9 years) and 270 case-control matched pairs of coronal short-tau inversion recovery (COR-STIR) and sagittal proton density fat-saturated (SAG-PD-FAT-SAT) knee MRI scans from the Multicenter Osteoarthritis Study database (follow-up of up to 7 years). Performance of MR-Transformer in predicting progression of knee osteoarthritis was compared with that of existing state-of-the-art deep learning models (TSE-Net, 3DMeT, and MRNet) using sevenfold nested cross-validation across the four MRI sequences. Results Among the 353 Osteoarthritis Initiative case-control pairs, 215 were women (mean age, 63 years ± 8 [SD]); among the 270 Multicenter Osteoarthritis Study case-control pairs, 203 were women (mean age, 65 years ± 7). MR-Transformer achieved areas under the receiver operating characteristic curve (AUCs) of 0.88 (95% CI: 0.85, 0.91), 0.88 (95% CI: 0.85, 0.90), 0.86 (95% CI: 0.82, 0.89), and 0.84 (95% CI: 0.81, 0.87) for COR-IW-TSE, SAG-IW-TSE-FS, COR-STIR, and SAG-PD-FAT-SAT, respectively, and a higher AUC than 3DMeT for all sequences (P < .001). The model showed its highest sensitivity, 83% (95% CI: 78, 87), and specificity, 83% (95% CI: 76, 88), for the COR-IW-TSE sequence. Conclusion Compared with existing deep learning models, MR-Transformer exhibited state-of-the-art performance in predicting progression of knee osteoarthritis to total knee replacement from MRI scans.
Keywords: MRI, Knee, Prognosis, Supervised Learning. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
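The AUC comparisons above can be grounded in the Mann-Whitney interpretation of the area under the ROC curve: the probability that a randomly chosen positive case outscores a randomly chosen negative case, with ties counted as half. A minimal sketch (toy scores only):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs where the positive scores higher."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy check: every positive outscores every negative, so AUC = 1.0.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

Under this reading, the model's AUC of 0.88 on COR-IW-TSE means that for 88% of case-control pairs, the knee that later progressed to replacement received the higher risk score.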
Sections Don't Lie: AI-driven Breast Cancer Detection Using MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250520
Mana Moassefi, Lekui Xiao
Radiology-Artificial Intelligence 7(5): e250520. No abstract available.
Citations: 0