Radiology-Artificial Intelligence: Latest Articles

Distinguishing between Rigor and Transparency in FDA Marketing Authorization of AI-enabled Medical Devices.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-24 DOI: 10.1148/ryai.250369
Abdul Rahman Diab, William Lotter
The increasing prevalence of AI-enabled medical devices presents significant opportunities for improving patient outcomes. However, recent studies based on public FDA summaries have raised concerns about the extent of validation that such devices undergo before FDA marketing authorization and subsequent clinical deployment. Here, we clarify key concepts of FDA regulation and provide insights into the current standards of performance validation, focusing on radiology AI devices. We distinguish between two fundamentally different but often conflated concepts: validation rigor (the quality and comprehensiveness of the evidence supporting a device's performance) and validation transparency (the extent to which this evidence is publicly accessible). We begin by describing the inverse relationship between the amount of performance data contained in, and the transparency of, specific components of an FDA submission. Drawing on FDA guidelines and on our own experience developing authorized AI devices, we then outline current validation standards and present a mapping from common radiology AI device types to their typical clinical study designs. We conclude with actionable recommendations, advocating for a balanced approach tailored to specific use cases while still enforcing certain universal standards. These measures will help ensure that AI-enabled medical devices are both rigorously evaluated and transparently reported, thereby fostering greater public trust and enhancing clinical utility. ©RSNA, 2025.
Citations: 0
DLMUSE: Robust Brain Segmentation in Seconds Using Deep Learning.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-17 DOI: 10.1148/ryai.240299
Vishnu M Bashyam, Guray Erus, Yuhan Cui, Di Wu, Gyujoon Hwang, Alexander Getka, Ashish Singh, George Aidinis, Kyunglok Baik, Randa Melhem, Elizabeth Mamourian, Jimit Doshi, Ashwini Davison, Ilya M Nasrallah, Christos Davatzikos
Purpose To introduce an open-source deep learning model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 years; mean age, 65 years [SD, 11.5]; 1007 female and 893 male participants), with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71,391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whitney U and McNemar tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores, 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years; P = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and an F1 score of 0.80, comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation was over 10,000 times faster than the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-the-art methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.
Citations: 0
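Both agreement metrics used above are simple to state precisely. Below is a minimal illustrative sketch (not DLMUSE code; the function name and toy data are assumptions) of the Dice similarity coefficient for binary masks and the Pearson correlation between predicted and reference regional volumes.

```python
# Minimal sketch of the two agreement metrics reported for DLMUSE.
import numpy as np
from scipy.stats import pearsonr

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: correlation of regional volumes across a small cohort.
rng = np.random.default_rng(0)
ref_vols = rng.normal(5000, 500, size=50)           # reference volumes, mm^3
pred_vols = ref_vols + rng.normal(0, 150, size=50)  # predicted volumes
r, _ = pearsonr(pred_vols, ref_vols)
print(f"Pearson r = {r:.3f}")
```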
Influence of Mammography Acquisition Parameters on AI and Radiologist Interpretive Performance.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-17 DOI: 10.1148/ryai.240861
William Lotter, Daniel S Hippe, Thomas Oshiro, Kathryn P Lowry, Hannah S Milch, Diana L Miglioretti, Joann G Elmore, Christoph I Lee, William Hsu
Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of AI and radiologists. Materials and Methods The associations between seven mammogram acquisition parameters (mammography machine version, kVp, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness) and the performance of AI and radiologists in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography DREAM Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were evaluated separately using generalized estimating equation (GEE)-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28,278 screening two-dimensional mammograms from 22,626 women (mean age, 58.5 years ± 11.5 [SD]; 4913 women had multiple mammograms). Of these, 324 examinations resulted in a breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure reduced specificity for the ensemble AI (-4.5% per 1-SD increase; P < .001) but not for radiologists (P = .44). Increased compression force reduced specificity for radiologists (-1.3% per 1-SD increase; P < .001) but not for AI (P = .60). Conclusion Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters affecting performance differently. ©RSNA, 2025.
Citations: 0
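The examination-level models described above are generalized estimating equations, which account for repeated examinations from the same woman. Below is a hedged sketch using the statsmodels GEE API; the column names (correct_negative, exposure_z, age, woman_id) are hypothetical stand-ins for the study's variables, and the synthetic data are for demonstration only.

```python
# Hedged sketch: logistic GEE relating specificity (correct negative
# interpretation among non-cancer exams) to one standardized acquisition
# parameter, with an exchangeable working correlation within each woman.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_specificity_gee(df: pd.DataFrame):
    model = sm.GEE.from_formula(
        "correct_negative ~ exposure_z + age",  # adjusted for clinical factors
        groups="woman_id",                       # repeated exams per woman
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()  # .params["exposure_z"]: log-odds change per 1 SD

# Toy demonstration with synthetic data:
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "woman_id": rng.integers(0, 150, n),
    "exposure_z": rng.normal(size=n),
    "age": rng.normal(58, 11, n),
})
logit = 2.0 - 0.3 * df["exposure_z"]
df["correct_negative"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
print(fit_specificity_gee(df).params)
```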
A Deep Learning Framework for Synthesizing Longitudinal Infant Brain MRI during Early Development.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-17 DOI: 10.1148/ryai.240708
Yu Fang, Honglin Xiong, Jiawei Huang, Feihong Liu, Zhenrong Shen, Xinyi Cai, Han Zhang, Qian Wang
Purpose To develop a three-stage, age- and modality-conditioned framework to synthesize longitudinal infant brain MRI scans, accounting for the rapid structural and contrast changes of early brain development. Materials and Methods This retrospective study used T1- and T2-weighted MRI scans (848 scans) from 139 infants in the Baby Connectome Project, collected since September 2016. The framework models three critical image cues (volumetric expansion, cortical folding, and myelination), predicting missing time points with age and modality as predictive factors. The method was compared with LGAN, CounterSyn, and a diffusion-based approach using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and the Dice similarity coefficient (DSC). Results The framework was trained on 119 participants (mean age, 11.25 ± 6.16 months; 60 female, 59 male) and tested on 20 (mean age, 12.98 ± 6.59 months; 11 female, 9 male). For T1-weighted images, PSNRs were 25.44 ± 1.95 and 26.93 ± 2.50 for forward and backward synthesis, with SSIMs of 0.87 ± 0.03 and 0.90 ± 0.02. For T2-weighted images, PSNRs were 26.35 ± 2.30 and 26.40 ± 2.56, with SSIMs of 0.87 ± 0.03 and 0.89 ± 0.02, significantly outperforming competing methods (P < .001). The framework also excelled in tissue segmentation (P < .001) and cortical reconstruction, achieving DSCs of 0.85 for gray matter and 0.86 for white matter, with intraclass correlation coefficients exceeding 0.8 in most cortical regions. Conclusion The proposed three-stage framework effectively synthesized age-specific infant brain MRI scans, outperforming competing methods in image quality and tissue segmentation, with strong performance in cortical reconstruction and potential for developmental modeling and longitudinal analyses. ©RSNA, 2025.
Citations: 0
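PSNR and SSIM, the two image-quality metrics above, can be computed with scikit-image as in the following minimal sketch on toy volumes; real evaluation would compare co-registered synthetic and acquired MRI scans.

```python
# Minimal sketch of the image-quality metrics used to score synthesized scans.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64)).astype(np.float32)  # acquired scan (toy)
synthetic = np.clip(
    reference + rng.normal(0, 0.05, reference.shape), 0, 1
).astype(np.float32)                                      # synthesized scan (toy)

psnr = peak_signal_noise_ratio(reference, synthetic, data_range=1.0)
ssim = structural_similarity(reference, synthetic, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```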
An Explainable Deep Learning Model for Focal Liver Lesion Diagnosis Using Multiparametric MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-10 DOI: 10.1148/ryai.240531
Zhehan Shen, Lingzhi Chen, Lilong Wang, Shunjie Dong, Fakai Wang, Yaning Pan, Jiahao Zhou, Yikun Wang, Xinxin Xu, Huanhuan Chong, Huimin Lin, Weixia Li, Ruokun Li, Haihong Ma, Jing Ma, Yixing Yu, Lianjun Du, Xiaosong Wang, Shaoting Zhang, Fuhua Yan
Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving the diagnostic accuracy and efficiency of radiologists in the classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nnU-Net and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023): nnU-Net was used for lesion segmentation and LIFT for FLL classification. External testing was performed on data from three hospitals (January 2018-December 2023), with a prospective test set obtained from January 2024 to April 2024. Model performance was compared with that of radiologists, and the impact of model assistance on the performance of junior and senior radiologists was assessed. Evaluation metrics included the Dice similarity coefficient (DSC) and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 years ± 12 [SD]; 1476 female) were included in the training, internal test, external test, and prospective test sets. Average DSC values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracies for feature and lesion classification across the three test sets were 93% and 97%, respectively. LIFT-assisted readings improved the diagnostic accuracy (average increase, 5.3%; P < .001), reduced the reading time (average decrease, 34.5 seconds; P < .001), and enhanced the confidence (average increase, 0.3 points; P < .001) of junior radiologists. Conclusion The proposed DL model accurately detected and classified FLLs, improving the diagnostic accuracy and efficiency of junior radiologists. ©RSNA, 2025.
Citations: 0
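The liver and tumor DSC values above extend the binary Dice metric to multiple tissue labels. The sketch below shows one generic way to compute a per-class DSC from integer label maps; the label scheme (0 = background, 1 = liver, 2 = tumor) and names are assumptions for illustration, not the study's actual code.

```python
# Generic per-class Dice over integer segmentation label maps.
import numpy as np

def dice_per_class(pred: np.ndarray, ref: np.ndarray, labels=(1, 2)) -> dict:
    """Return one Dice score per tissue label."""
    scores = {}
    for label in labels:
        p, r = pred == label, ref == label
        denom = p.sum() + r.sum()
        scores[label] = 1.0 if denom == 0 else 2.0 * (p & r).sum() / denom
    return scores

# Toy example on a perturbed copy of a reference label map:
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(32, 32, 32))
pred = ref.copy()
pred[rng.random(ref.shape) < 0.05] = 0  # corrupt 5% of voxels
print(dice_per_class(pred, ref))
```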
Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-03 DOI: 10.1148/ryai.240417
Yao-Kuan Wang, Zan Klanecek, Tobias Wagner, Lesley Cockmartin, Nicholas Marshall, Andrej Studen, Robert Jeraj, Hilde Bosmans
Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations and contribute meaningfully to its predictions. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29,374 screening examinations with mammograms (10,415 women; mean age at examination, 60 years [SD, 11]) from the EMBED dataset (2013-2020) were used to evaluate feature importance with a feature-centric explainable AI pipeline. Risk prediction using only calcification features (CalcMirai) or only mass features (MassMirai) was evaluated against the full Mirai model. Performance was assessed in screening and screen-negative (time to cancer > 6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai performed worse than Mirai in lesion detection (screening population, 1-year AUC: Mirai, 0.81 [95% CI: 0.78, 0.84]; CalcMirai, 0.76 [95% CI: 0.73, 0.80]; MassMirai, 0.74 [95% CI: 0.71, 0.78]; P < .001). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population, 5-year AUC: Mirai, 0.66 [95% CI: 0.63, 0.69]; CalcMirai, 0.66 [95% CI: 0.64, 0.69]; P = .71); however, MassMirai performed worse than Mirai (AUC, 0.57 [95% CI: 0.54, 0.60]; P < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcifications in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. ©RSNA, 2025.
Citations: 0
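A common way to obtain AUCs with 95% CIs like those reported above is a nonparametric bootstrap over examinations. The sketch below assumes that approach (the article's exact CI method is not stated here); the data are synthetic.

```python
# Hedged sketch: AUC with a percentile-bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    n, aucs = len(y_true), []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, size=n)       # resample exams with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                           # need both classes for an AUC
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Toy usage on synthetic labels and scores:
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
s = y * 0.6 + rng.random(300) * 0.8
auc, (lo, hi) = bootstrap_auc(y, s)
print(f"AUC = {auc:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```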
Quantitative Pharmacokinetic Mapping with AI: Toward More Generalizable Response Prediction in Breast Cancer MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-01 DOI: 10.1148/ryai.250550
Tician Schnitzler
Radiology-Artificial Intelligence, 7(5): e250550. (No abstract available.)
Citations: 0
Collaborative Integration of AI and Human Expertise to Improve Detection of Chest Radiograph Abnormalities.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-01 DOI: 10.1148/ryai.240277
Akash Awasthi, Ngan Le, Zhigang Deng, Carol C Wu, Hien Van Nguyen
{"title":"Collaborative Integration of AI and Human Expertise to Improve Detection of Chest Radiograph Abnormalities.","authors":"Akash Awasthi, Ngan Le, Zhigang Deng, Carol C Wu, Hien Van Nguyen","doi":"10.1148/ryai.240277","DOIUrl":"10.1148/ryai.240277","url":null,"abstract":"<p><p>Purpose To develop a collaborative artificial intelligence (AI) system that integrates eye gaze data and radiology reports to improve diagnostic accuracy in chest radiograph interpretation by identifying and correcting perceptual errors. Materials and Methods This retrospective study used public datasets REFLACX (Reports and Eye-Tracking Data for Localization of Abnormalities in Chest X-rays) and EGD-CXR (Eye Gaze Data for Chest X-rays) to develop a collaborative AI solution, named Collaborative Radiology Expert (CoRaX). It uses a large multimodal model to analyze image embeddings, eye gaze data, and radiology reports, aiming to rectify perceptual errors in chest radiology. The proposed system was evaluated using two simulated error datasets featuring random and uncertain alterations of five abnormalities. Evaluation focused on the system's referral-making process, the quality of referrals, and its performance within collaborative diagnostic settings. Results In the random masking-based error dataset, 28.0% (93 of 332) of abnormalities were altered. The system successfully corrected 21.3% (71 of 332) of these errors, with 6.6% (22 of 332) remaining unresolved. The accuracy of the system in identifying the correct regions of interest for missed abnormalities was 63.0% (95% CI: 59.0, 68.0), and 85.7% (240 of 280) of interactions with radiologists were deemed satisfactory, meaning that the system provided diagnostic aid to radiologists. In the uncertainty-masking-based error dataset, 43.9% (146 of 332) of abnormalities were altered. The system corrected 34.6% (115 of 332) of these errors, with 9.3% (31 of 332) unresolved. The accuracy of predicted regions of missed abnormalities for this dataset was 58.0% (95% CI: 55.0, 62.0), and 78.4% (233 of 297) of interactions were satisfactory. Conclusion The CoRaX system can collaborate efficiently with radiologists and address perceptual errors across various abnormalities in chest radiographs. <b>Keywords:</b> Perception, Convolutional Neural Network (CNN), Deep Learning Algorithms, Radiology-Pathology Integration, Unsupervised Learning, CoRaX, Perceptual Error, Referral, Deferral <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Levi and Laghi in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240277"},"PeriodicalIF":13.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144643693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
"You'll Never Look Alone": Embedding Second-Look AI into the Radiologist's Workflow.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-01 DOI: 10.1148/ryai.250575
Riccardo Levi, Andrea Laghi
Radiology-Artificial Intelligence, 7(5): e250575. (Commentary on the CoRaX study by Awasthi et al; no abstract available.)
Citations: 0
Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-09-01 DOI: 10.1148/ryai.240833
Zhongyi Zhang, Julie A Hides, Enrico De Martino, Janet R Millner, Gervase Tuxworth
Chronic low back pain is a global health issue with a considerable socioeconomic burden and is associated with changes in the lumbar paraspinal muscles (LPMs). In this retrospective study, a deep learning method for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment was trained and externally validated across multisequence MR images. A total of 1302 MR images from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and intraclass correlation coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93-0.97 on the external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (P < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPMs and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MR images. Keywords: MR Imaging, Muscular, Volume Analysis, Segmentation, Vision, Application Domain, Quantification, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
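The equivalence analysis above uses two one-sided tests (TOST). The following sketch shows a paired TOST with statsmodels on toy volume measurements; the ±5% equivalence margin and the synthetic data are assumptions for illustration, not the study's own choices.

```python
# Hedged sketch: paired TOST equivalence check between automated and
# manual muscle-volume measurements.
import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(0)
manual = rng.normal(180.0, 20.0, size=60)      # manual volumes, cm^3 (toy)
auto = manual + rng.normal(0.0, 3.0, size=60)  # automated volumes (toy)

margin = 0.05 * manual.mean()                  # assumed equivalence bound
p_tost, _, _ = ttost_paired(auto, manual, -margin, margin)
print(f"TOST P = {p_tost:.4f}  (P < .05 supports equivalence)")
```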