Radiology-Artificial Intelligence: Latest Articles

Predicting Respiratory Disease Mortality Risk Using Open-Source AI on Chest Radiographs in an Asian Health Screening Population.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240628
Jong Hyuk Lee, Seung Ho Choi, Hugo J W L Aerts, Jakob Weiss, Vineet K Raghu, Michael T Lu, Jayoun Kim, Seungho Lee, Dongheon Lee, Hyungjin Kim
Abstract: Purpose To assess the prognostic value of an open-source deep learning-based chest radiograph algorithm, CXR-Lung-Risk, for stratifying respiratory disease mortality risk in an Asian health screening population using baseline and follow-up chest radiographs. Materials and Methods This single-center, retrospective study analyzed chest radiographs from individuals who underwent health screenings between January 2004 and June 2018. CXR-Lung-Risk scores from baseline chest radiographs were externally tested for predicting mortality due to lung disease or lung cancer using competing risk analysis, with adjustment for clinical factors. The added value of the risk scores beyond clinical factors was evaluated with the likelihood ratio test. An exploratory analysis applied a time-series clustering algorithm to the CXR-Lung-Risk trajectory over a 3-year follow-up period for individuals in the highest quartile of baseline respiratory disease mortality risk. Results Among 36 924 individuals (median age, 58 years [IQR, 53-62 years]; 22 352 male), 264 (0.7%) died of respiratory illness over a median follow-up of 11.0 years (IQR, 7.8-12.7 years). CXR-Lung-Risk predicted respiratory disease mortality (adjusted hazard ratio [HR] per 5 years: 2.01; 95% CI: 1.76, 2.39; P < .001), offering a prognostic improvement over clinical factors alone (P < .001). The trajectory analysis identified a subgroup with a continuously increasing CXR-Lung-Risk score, which had poorer outcomes (adjusted HR for respiratory disease mortality: 3.26; 95% CI: 1.20, 8.81; P = .02) than the subgroup with a continuously decreasing score. Conclusion The open-source CXR-Lung-Risk model predicted respiratory disease mortality in an Asian cohort, enabling a two-layer risk stratification approach through an exploratory longitudinal analysis of baseline and follow-up chest radiographs. Keywords: Conventional Radiography, Thorax, Lung, Mediastinum, Heart, Outcomes Analysis. Supplemental material is available for this article. © RSNA, 2025. See also commentary by Júdice de Mattos Farina and Kuriki in this issue.
Citations: 0
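The exploratory analysis above groups individuals by how their CXR-Lung-Risk score evolves over follow-up. As a toy illustration only (this is not the paper's time-series clustering algorithm, and the score values are hypothetical), trajectories can be separated into rising and falling groups by the sign of a fitted linear trend:

```python
import numpy as np

def group_trajectories_by_slope(trajectories, times):
    """Label each risk-score trajectory 'increasing' or 'decreasing'
    from the sign of its least-squares slope over follow-up.
    A toy stand-in for the paper's time-series clustering."""
    groups = []
    for y in trajectories:
        slope = np.polyfit(times, y, 1)[0]  # linear trend of the score
        groups.append("increasing" if slope > 0 else "decreasing")
    return groups

# Annual CXR-Lung-Risk scores per individual (hypothetical values).
times = np.array([0.0, 1.0, 2.0, 3.0])
scores = np.array([
    [62.0, 64.5, 66.0, 68.2],   # steadily rising risk score
    [61.0, 60.2, 59.1, 58.0],   # steadily falling risk score
])
print(group_trajectories_by_slope(scores, times))
```

The paper associates the continuously increasing subgroup with worse outcomes; a slope sign is the crudest possible version of that distinction.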
Deep Anatomical Federated Network (Dafne): An Open Client-Server Framework for Continuous, Collaborative Improvement of Deep Learning-based Medical Image Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240097
Francesco Santini, Jakob Wasserthal, Abramo Agosti, Xeni Deligianni, Kevin R Keene, Hermien E Kan, Stefan Sommer, Fengdan Wang, Claudia Weidensteiner, Giulia Manco, Matteo Paoletti, Valentina Mazzoli, Arjun Desai, Anna Pichiecchio
Abstract: Purpose To present and evaluate Dafne (deep anatomical federated network), a freely available, decentralized, collaborative deep learning system for semantic segmentation of radiologic images through federated incremental learning. Materials and Methods Dafne is free software with a client-server architecture. The client is an advanced user interface that applies the deep learning models stored on the server to the user's data and lets the user check and refine the predictions. Incremental learning is then performed on the client side, and the update is sent back to the server, where it is integrated into the root model. Dafne was evaluated locally by assessing the performance gain across model generations on 38 MRI datasets of the lower legs and through analysis of real-world usage statistics (639 use cases). Results Dafne demonstrated a statistically significant improvement in semantic segmentation accuracy over time (the Dice similarity coefficient increased by an average of 0.007 per generation on the local validation set; P < .001). Qualitatively, the models showed enhanced performance on various radiologic image types, including those not present in the initial training sets, indicating good generalizability. Conclusion Dafne showed improvement in segmentation quality over time, demonstrating potential for learning and generalization. Keywords: Segmentation, Muscular, Open Client-Server Framework. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
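Dafne's server integrates client-side incremental updates into a root model. As a conceptual sketch only (the weighting scheme is an assumption for illustration; Dafne's actual merge policy may differ), one simple integration strategy is a weighted average of client weight deltas:

```python
import numpy as np

def merge_client_updates(root_weights, client_deltas, client_sizes):
    """Fold client fine-tuning deltas into the root model as a weighted
    average, with weights proportional to each client's data size.
    Conceptual sketch only; not Dafne's documented merge policy."""
    total = float(sum(client_sizes))
    merged = root_weights.copy()
    for delta, n in zip(client_deltas, client_sizes):
        merged += (n / total) * delta
    return merged

# Two clients send back deltas for a tiny 4-parameter "model".
root = np.zeros(4)
deltas = [np.array([0.4, 0.0, 0.0, 0.4]),
          np.array([0.0, 0.2, 0.0, 0.2])]
sizes = [10, 30]  # client 2 contributes 3x as much data
print(merge_client_updates(root, deltas, sizes))
```

The size-weighting mirrors the federated-averaging idea that clients with more data should pull the shared model harder.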
Pseudo-Contrast-enhanced US via Enhanced Generative Adversarial Networks for Evaluating Tumor Ablation Efficacy.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240370
Chen Chen, Jiabin Yu, Zhikang Xu, Changsong Xu, Zubang Zhou, Jindong Hao, Vicky Yang Wang, Jincao Yao, Lingyan Zhou, Chenke Xu, Mei Song, Qi Zhang, Xiaofang Liu, Lin Sui, Yuqi Yan, Tian Jiang, Yahan Zhou, Yingtianqi Wu, Binggang Xiao, Chenjie Xu, Hongmei Mi, Li Yang, Zhiwei Wu, Qingquan He, Jian Chen, Qi Liu, Dong Xu
Abstract: Purpose To develop a method for creating pseudo-contrast-enhanced US (CEUS) images using an enhanced generative adversarial network and to evaluate its ability to assess tumor ablation effectiveness. Materials and Methods This retrospective study included 1030 patients who underwent thyroid nodule ablation at seven centers from January 2020 to April 2023. A generative adversarial network-based model was developed for direct pseudo-CEUS generation from B-mode US and tested on thyroid, breast, and liver ablation datasets. The reliability of pseudo-CEUS was assessed against real CEUS using the structural similarity index measure (SSIM), color histogram correlation, and mean absolute percentage error. A subjective evaluation system was also devised to validate clinical value, and the Wilcoxon signed rank test was used to analyze differences. Results The study included 1030 patients (mean age, 46.9 years ± 12.5 [SD]; 799 female and 231 male). For internal test set 1, the mean SSIM was 0.89 ± 0.05, and across external test sets 1-6, mean SSIM values ranged from 0.84 ± 0.08 to 0.88 ± 0.04. Subjective assessments affirmed the method's stability and near-realistic performance in evaluating ablation effectiveness: the thyroid ablation datasets had a mean identification score of 0.49 (0.5 indicates indistinguishability), and the mean similarity score across all datasets was 4.75 of 5. Radiologists' assessments of residual blood supply were nearly identical, with no differences between real and pseudo-CEUS in defining ablation zones. Conclusion The pseudo-CEUS method demonstrated high similarity to real CEUS in evaluating tumor ablation effectiveness. Keywords: Ablation Techniques, Ultrasound, Computer Applications-Virtual Imaging. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
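Two of the reliability metrics named in the abstract, mean absolute percentage error and histogram correlation, are straightforward to compute. A minimal NumPy sketch (bin count and intensity range are illustrative choices, not the paper's settings):

```python
import numpy as np

def mape(real, fake, eps=1e-8):
    """Mean absolute percentage error between real and pseudo-CEUS pixels."""
    real = real.astype(float)
    fake = fake.astype(float)
    return 100.0 * np.mean(np.abs(real - fake) / (np.abs(real) + eps))

def histogram_correlation(real, fake, bins=32, value_range=(0, 256)):
    """Pearson correlation between the intensity histograms of two images,
    used here as a simple global-appearance similarity proxy."""
    h1, _ = np.histogram(real, bins=bins, range=value_range)
    h2, _ = np.histogram(fake, bins=bins, range=value_range)
    return np.corrcoef(h1, h2)[0, 1]

# Sanity check on a random 8-bit image compared with itself:
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(64, 64))
print(mape(real, real), histogram_correlation(real, real))  # zero error, correlation 1
```

SSIM, the third metric, is more involved (local means, variances, and covariances); in practice a library implementation such as scikit-image's would typically be used rather than hand-rolling it.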
Better Data and Smarter AI: Automated Quality Control for Chest Radiographs.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.250135
Masahiro Yanagawa, Junya Sato
Radiology-Artificial Intelligence 7(3): e250135. No abstract available.
Citations: 0
Evaluating Skellytour for Automated Skeleton Segmentation from Whole-Body CT Images.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.240050
Daniel C Mann, Michael W Rutherford, Phillip Farmer, Joshua M Eichhorn, Fathima Fijula Palot Manzil, Christopher P Wardell
Abstract: Purpose To construct and evaluate the performance of a machine learning model for bone segmentation using whole-body CT images. Materials and Methods In this retrospective study, whole-body CT scans (June 2010 to January 2018) from 90 patients (mean age, 61 years ± 9 [SD]; 45 male, 45 female) with multiple myeloma were manually segmented using 60 labels and subsegmented into cortical and trabecular bone. Segmentations were verified by board-certified radiology and nuclear medicine physicians. The impacts of isotropy, resolution, multiple labeling schemes, and postprocessing were assessed. Model performance was assessed on internal and external test datasets (362 scans) and benchmarked against the TotalSegmentator segmentation model, using the Dice similarity coefficient (DSC), normalized surface distance (NSD), and manual inspection. Results Skellytour achieved consistently high segmentation performance on the internal dataset (DSC: 0.94; NSD: 0.99) and two external datasets (DSC: 0.94 and 0.96; NSD: 0.999 and 1.0), outperforming TotalSegmentator on the first two datasets. Subsegmentation performance was also high (DSC: 0.95; NSD: 0.995). Skellytour produced finely detailed segmentations, even in low-density bones. Conclusion The study demonstrates that Skellytour is an accurate and generalizable bone segmentation and subsegmentation model for CT data; it is available as a Python package via GitHub (https://github.com/cpwardell/Skellytour). Keywords: CT, Informatics, Skeletal-Axial, Demineralization-Bone, Comparative Studies, Segmentation, Supervised Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. Published under a CC BY 4.0 license. See also commentary by Khosravi and Rouzrokh in this issue.
Citations: 0
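The Dice similarity coefficient reported above reduces to a simple overlap ratio between two binary masks: twice the intersection divided by the sum of the mask sizes. A minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

pred   = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*1 / (2+1) = 2/3
```

Normalized surface distance, the paper's other metric, instead measures how far the predicted surface lies from the reference surface within a tolerance, so it rewards boundary accuracy rather than volume overlap.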
Erratum for: CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.259001
Zi Wang, Fanwen Wang, Chen Qin, Jun Lyu, Cheng Ouyang, Shuo Wang, Yan Li, Mengyao Yu, Haoyu Zhang, Kunyuan Guo, Zhang Shi, Qirong Li, Ziqiang Xu, Yajing Zhang, Hao Li, Sha Hua, Binghua Chen, Longyu Sun, Mengting Sun, Qing Li, Ying-Hua Chu, Wenjia Bai, Jing Qin, Xiahai Zhuang, Claudia Prieto, Alistair Young, Michael Markl, He Wang, Lian-Ming Wu, Guang Yang, Xiaobo Qu, Chengyan Wang
Radiology-Artificial Intelligence 7(2): e259001. No abstract available.
Citations: 0
Bone Appetit: Skellytour Sets the Table for Robust Skeletal Segmentation.
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.250057
Bardia Khosravi, Pouria Rouzrokh
Radiology-Artificial Intelligence 7(2): e250057. No abstract available.
Citations: 0
NNFit: A Self-Supervised Deep Learning Method for Accelerated Quantification of High-Resolution Short-Echo-Time MR Spectroscopy Datasets.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.230579
Alexander S Giuffrida, Sulaiman Sheriff, Vicki Huang, Brent D Weinberg, Lee A D Cooper, Yuan Liu, Brian J Soher, Michael Treadway, Andrew A Maudsley, Hyunsuk Shim
Abstract: Purpose To develop and evaluate NNFit, a self-supervised deep learning method for quantification of high-resolution short-echo-time (TE) echo-planar spectroscopic imaging (EPSI) datasets, with the goal of addressing the computational bottleneck of conventional spectral quantification methods in the clinical workflow. Materials and Methods This retrospective study included 89 short-TE whole-brain EPSI/generalized autocalibrating partially parallel acquisition scans from clinical trials for glioblastoma (trial 1, May 2014-October 2018) and major depressive disorder (trial 2, 2022-2023). The training dataset included 685 000 spectra from 20 participants (60 scans) in trial 1. The testing dataset included 115 000 spectra from five participants (13 scans) in trial 1 and 145 000 spectra from seven participants (16 scans) in trial 2. NNFit was compared with a widely used parametric-modeling spectral quantification method (FITT). Metabolite maps generated by each method were compared using the structural similarity index measure (SSIM) and the linear correlation coefficient (R²). Radiation treatment volumes for glioblastoma based on the metabolite maps were compared using the Dice coefficient and a two-tailed t test. Results Mean SSIM and R² values for the trial 1 test set were 0.91 and 0.90 for choline, 0.93 and 0.93 for creatine, 0.93 and 0.93 for N-acetylaspartate, 0.80 and 0.72 for myo-inositol, and 0.59 and 0.47 for glutamate plus glutamine; for the trial 2 test set, they were 0.95 and 0.95, 0.98 and 0.97, 0.98 and 0.98, 0.92 and 0.92, and 0.79 and 0.81, respectively. The treatment volumes had a mean Dice coefficient of 0.92. Mean processing times were 90.1 seconds for NNFit and 52.9 minutes for FITT. Conclusion A deep learning approach to spectral quantification offers performance similar to that of conventional quantification methods for EPSI data, with much faster processing at short TE. Keywords: MR Spectroscopy, Neural Networks, Brain/Brain Stem. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
Post-Training Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.240353
Tobias Weber, Jakob Dexl, David Rügamer, Michael Ingrisch
Abstract: Purpose To investigate whether the computational effort of three-dimensional CT-based multiorgan segmentation with TotalSegmentator can be reduced via Tucker decomposition-based network compression. Materials and Methods In this retrospective study, Tucker decomposition was applied to the convolutional kernels of the TotalSegmentator model, an nnU-Net model trained on a comprehensive CT dataset for automatic segmentation of 117 anatomic structures. The approach reduces the floating-point operations and memory required during inference, offering an adjustable trade-off between computational efficiency and segmentation quality. The study used the publicly available TotalSegmentator dataset of 1228 segmented CT scans, with a test subset of 89 scans, and explored various downsampling factors to characterize the relationship between model size, inference speed, and segmentation accuracy. Segmentation performance was evaluated using the Dice score. Results Tucker decomposition substantially reduced the model's parameters and floating-point operations across various compression ratios, with limited loss in segmentation accuracy: up to 88.17% of the parameters were removed, with no evidence of performance differences from the original model for 113 of 117 classes after fine-tuning. Practical speedups varied across graphics processing unit architectures, with more pronounced gains on less powerful hardware. Conclusion Post hoc network compression via Tucker decomposition is a viable strategy for reducing the computational demand of medical image segmentation models without substantially impacting accuracy. Keywords: Deep Learning, Segmentation, Network Compression, Convolution, Tucker Decomposition. Supplemental material is available for this article. © RSNA, 2025.
Citations: 0
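Tucker decomposition factors a convolution kernel into a small core tensor and per-mode factor matrices, so a layer can be replaced by a sequence of cheaper operations. The following pure-NumPy sketch of truncated higher-order SVD over the channel modes of a 2D kernel is illustrative only; the paper compresses TotalSegmentator's 3D kernels and fine-tunes afterward, and its exact rank-selection procedure is not reproduced here:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows, all other axes the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_compress_kernel(kernel, rank_in, rank_out):
    """Truncated HOSVD of a conv kernel shaped (out_ch, in_ch, k, k).
    Only the channel modes are compressed; spatial dims stay intact.
    Returns the core tensor and the two channel factor matrices."""
    U_out = np.linalg.svd(unfold(kernel, 0), full_matrices=False)[0][:, :rank_out]
    U_in  = np.linalg.svd(unfold(kernel, 1), full_matrices=False)[0][:, :rank_in]
    # Core = kernel projected onto the truncated channel bases.
    core = np.einsum('oikl,or,is->rskl', kernel, U_out, U_in)
    return core, U_out, U_in

def reconstruct(core, U_out, U_in):
    """Expand the compressed factors back to a full kernel."""
    return np.einsum('rskl,or,is->oikl', core, U_out, U_in)

# A kernel whose channel modes genuinely have rank 2 is recovered exactly.
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2, 3, 3))        # small random core
A = rng.standard_normal((8, 2))              # output-channel factor
B = rng.standard_normal((4, 2))              # input-channel factor
K = np.einsum('rskl,or,is->oikl', G, A, B)   # 8x4x3x3 kernel, channel rank 2
core, U_out, U_in = tucker_compress_kernel(K, rank_in=2, rank_out=2)
print(np.allclose(reconstruct(core, U_out, U_in), K))
```

For this toy kernel the full tensor holds 8 × 4 × 9 = 288 values, while the factored form holds 2 × 2 × 9 + 8 × 2 + 4 × 2 = 60; real layers trade a small reconstruction error for much larger reductions.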
Editor's Recognition Awards.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-03-01 DOI: 10.1148/ryai.250164
Charles E Kahn
Radiology-Artificial Intelligence 7(2): e250164. No abstract available.
Citations: 0