Deep Learning Radiomics Model Based on Computed Tomography Image for Predicting the Classification of Osteoporotic Vertebral Fractures: Algorithm Development and Validation.

IF 3.8 · CAS Tier 3 (Medicine) · JCR Q2 (Medical Informatics)
Jiayi Liu, Lincen Zhang, Yousheng Yuan, Jun Tang, Yongkang Liu, Liang Xia, Jun Zhang
{"title":"Deep Learning Radiomics Model Based on Computed Tomography Image for Predicting the Classification of Osteoporotic Vertebral Fractures: Algorithm Development and Validation.","authors":"Jiayi Liu, Lincen Zhang, Yousheng Yuan, Jun Tang, Yongkang Liu, Liang Xia, Jun Zhang","doi":"10.2196/75665","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Osteoporotic vertebral fractures (OVFs) are common in older adults and often lead to disability if not properly diagnosed and classified. With the increased use of computed tomography (CT) imaging and the development of radiomics and deep learning technologies, there is potential to improve the classification accuracy of OVFs.</p><p><strong>Objective: </strong>This study aims to evaluate the efficacy of a deep learning radiomics model, derived from CT imaging, in accurately classifying OVFs.</p><p><strong>Methods: </strong>The study analyzed 981 patients (aged 50-95 years; 687 women, 294 men), involving 1098 vertebrae, from 3 medical centers who underwent both CT and magnetic resonance imaging examinations. The Assessment System of Thoracolumbar Osteoporotic Fractures (ASTLOF) classified OVFs into Classes 0, 1, and 2. The data were categorized into 4 cohorts: training (n=750), internal validation (n=187), external validation (n=110), and prospective validation (n=51). Deep transfer learning used the ResNet-50 architecture, pretrained on RadImageNet and ImageNet, to extract imaging features. Deep transfer learning-based features were combined with radiomics features and refined using Least Absolute Shrinkage and Selection Operator (LASSO) regression. The performance of 8 machine learning classifiers for OVF classification was assessed using receiver operating characteristic metrics and the \"One-vs-Rest\" approach. Performance comparisons between RadImageNet- and ImageNet-based models were performed using the DeLong test. Shapley Additive Explanations (SHAP) analysis was used to interpret feature importance and the predictive rationale of the optimal fusion model.</p><p><strong>Results: </strong>Feature selection and fusion yielded 33 and 54 fused features for the RadImageNet- and ImageNet-based models, respectively, following pretraining on the training set. The best-performing machine learning algorithms for these 2 deep learning radiomics models were the multilayer perceptron and Light Gradient Boosting Machine (LightGBM). The macro-average area under the curve (AUC) values for the fused models based on RadImageNet and ImageNet were 0.934 and 0.996, respectively, with DeLong test showing no statistically significant difference (P=2.34). The RadImageNet-based model significantly surpassed the ImageNet-based model across internal, external, and prospective validation sets, with macro-average AUCs of 0.837 versus 0.648, 0.773 versus 0.633, and 0.852 versus 0.648, respectively (P<.05). Using the binary \"One-vs-Rest\" approach, the RadImageNet-based fused model achieved superior predictive performance for Class 2 (AUC=0.907, 95% CI 0.805-0.999), with Classes 0 and 1 following (AUC/accuracy=0.829/0.803 and 0.794/0.768, respectively). 
SHAP analysis provided a visualization of feature importance in the RadImageNet-based fused model, highlighting the top 3 most influential features: cluster shade, mean, and large area low gray level emphasis, and their respective impacts on predictions.</p><p><strong>Conclusions: </strong>The RadImageNet-based fused model using CT imaging data exhibited superior predictive performance compared to the ImageNet-based model, demonstrating significant utility in OVF classification and aiding clinical decision-making for treatment planning. Among the 3 classes, the model performed best in identifying Class 2, followed by Class 0 and Class 1.</p>","PeriodicalId":56334,"journal":{"name":"JMIR Medical Informatics","volume":"13 ","pages":"e75665"},"PeriodicalIF":3.8000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12396830/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Medical Informatics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/75665","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Osteoporotic vertebral fractures (OVFs) are common in older adults and often lead to disability if not properly diagnosed and classified. With the increased use of computed tomography (CT) imaging and the development of radiomics and deep learning technologies, there is potential to improve the classification accuracy of OVFs.

Objective: This study aims to evaluate the efficacy of a deep learning radiomics model, derived from CT imaging, in accurately classifying OVFs.

Methods: The study analyzed 981 patients (aged 50-95 years; 687 women, 294 men; 1098 vertebrae in total) from 3 medical centers who underwent both CT and magnetic resonance imaging examinations. The Assessment System of Thoracolumbar Osteoporotic Fractures (ASTLOF) classified OVFs into Classes 0, 1, and 2. The data were divided into 4 cohorts: training (n=750), internal validation (n=187), external validation (n=110), and prospective validation (n=51). Deep transfer learning used the ResNet-50 architecture, pretrained on RadImageNet and ImageNet, to extract imaging features. The deep transfer learning features were combined with radiomics features and refined using Least Absolute Shrinkage and Selection Operator (LASSO) regression. The performance of 8 machine learning classifiers for OVF classification was assessed using receiver operating characteristic metrics and the "One-vs-Rest" approach. Performance of the RadImageNet- and ImageNet-based models was compared using the DeLong test. Shapley Additive Explanations (SHAP) analysis was used to interpret feature importance and the predictive rationale of the optimal fusion model.
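To make the workflow concrete, the sketch below illustrates the deep-feature extraction and feature-fusion step in Python, assuming PyTorch/torchvision and scikit-learn. The RadImageNet checkpoint path, image preprocessing, and LASSO settings are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of deep-feature extraction with a pretrained ResNet-50 and
# LASSO-based selection of fused (deep + radiomics) features.
# Assumptions: PyTorch/torchvision and scikit-learn are installed; the RadImageNet
# checkpoint path and all hyperparameters are illustrative, not the study's settings.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler


def build_extractor(radimagenet_ckpt=None):
    """ResNet-50 backbone with its classification head removed (2048-d output)."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if radimagenet_ckpt is not None:
        # Hypothetical: load a separately downloaded RadImageNet state dict.
        net.load_state_dict(torch.load(radimagenet_ckpt, map_location="cpu"), strict=False)
    net.fc = torch.nn.Identity()  # keep the pooled 2048-d feature vector
    net.eval()
    return net


@torch.no_grad()
def extract_deep_features(net, images):
    """images: float tensor (N, 3, 224, 224), already cropped and normalized."""
    return net(images).cpu().numpy()  # (N, 2048)


def fuse_and_select(deep_feats, radiomics_feats, labels):
    """Concatenate deep and radiomics features, standardize, keep LASSO-selected columns."""
    fused = StandardScaler().fit_transform(np.hstack([deep_feats, radiomics_feats]))
    lasso = LassoCV(cv=5, random_state=0).fit(fused, labels)
    keep = np.flatnonzero(lasso.coef_)  # features with non-zero coefficients survive
    return fused[:, keep], keep
```

In the study, the LASSO-reduced fused features were then passed to 8 candidate classifiers; the helper above only shows how such a reduced feature matrix could be produced.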

Results: After pretraining, feature selection and fusion on the training set yielded 33 and 54 fused features for the RadImageNet- and ImageNet-based models, respectively. The best-performing machine learning algorithms for these 2 deep learning radiomics models were the multilayer perceptron and Light Gradient Boosting Machine (LightGBM). The macro-average area under the curve (AUC) values for the fused models based on RadImageNet and ImageNet were 0.934 and 0.996, respectively, with the DeLong test showing no statistically significant difference (P=2.34). The RadImageNet-based model significantly surpassed the ImageNet-based model across the internal, external, and prospective validation sets, with macro-average AUCs of 0.837 versus 0.648, 0.773 versus 0.633, and 0.852 versus 0.648, respectively (P<.05). Using the binary "One-vs-Rest" approach, the RadImageNet-based fused model achieved its best predictive performance for Class 2 (AUC=0.907, 95% CI 0.805-0.999), followed by Class 0 (AUC/accuracy=0.829/0.803) and Class 1 (AUC/accuracy=0.794/0.768). SHAP analysis provided a visualization of feature importance in the RadImageNet-based fused model, highlighting the 3 most influential features (cluster shade, mean, and large area low gray level emphasis) and their respective impacts on predictions.
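The "One-vs-Rest" macro-average AUC and the SHAP-based importance ranking can be sketched as follows, again under stated assumptions: LightGBM stands in for the selected classifier, the synthetic data only mimic the shape of the 33 fused features, and none of the settings reproduce the study's actual data or tuning.

```python
# Minimal sketch of the "One-vs-Rest" macro-average AUC and a SHAP feature-importance
# ranking. Assumptions: scikit-learn, LightGBM, and shap are installed; the synthetic
# data stand in for the 33 fused features and are not the study's data or settings.
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy 3-class problem mimicking the shape of the fused feature matrix (33 columns).
X, y = make_classification(n_samples=300, n_features=33, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = LGBMClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = model.predict_proba(X_val)  # (N, 3) class probabilities

# Macro-average AUC across classes, plus a binary one-vs-rest AUC for each class.
macro_auc = roc_auc_score(y_val, proba, multi_class="ovr", average="macro")
per_class_auc = {c: roc_auc_score((y_val == c).astype(int), proba[:, i])
                 for i, c in enumerate(model.classes_)}
print(f"macro AUC = {macro_auc:.3f}", per_class_auc)

# Rank features by mean absolute SHAP value (the quantity a SHAP summary plot shows).
sv = np.asarray(shap.TreeExplainer(model).shap_values(X_val))
if sv.ndim == 3 and sv.shape[-1] == len(model.classes_):
    sv = np.moveaxis(sv, -1, 0)  # normalize to (classes, samples, features)
top3 = np.argsort(np.abs(sv).mean(axis=(0, 1)))[-3:][::-1]
print("3 most influential feature indices:", top3)
```

Ranking features by mean absolute SHAP value is the same quantity visualized in a SHAP summary plot, so the indices returned here correspond to the kind of "top 3" features reported above.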

Conclusions: The RadImageNet-based fused model using CT imaging data exhibited superior predictive performance compared to the ImageNet-based model, demonstrating significant utility in OVF classification and aiding clinical decision-making for treatment planning. Among the 3 classes, the model performed best in identifying Class 2, followed by Class 0 and Class 1.

Source journal: JMIR Medical Informatics (Medicine – Health Informatics)
CiteScore: 7.90 · Self-citation rate: 3.10% · Articles published: 173 · Review time: 12 weeks

Journal description: JMIR Medical Informatics (JMI, ISSN 2291-9694) is a top-rated, tier A journal which focuses on clinical informatics, big data in health and health care, decision support for health professionals, electronic health records, ehealth infrastructures and implementation. It has a focus on applied, translational research, with a broad readership including clinicians, CIOs, engineers, industry, and health informatics professionals. Published by JMIR Publications, publisher of the Journal of Medical Internet Research (JMIR), the leading eHealth/mHealth journal (Impact Factor 2016: 5.175), JMIR Med Inform has a slightly different scope (emphasizing applications for clinicians and health professionals rather than consumers/citizens, who are the focus of JMIR), publishes even faster, and also allows papers that are more technical or more formative than what would be published in the Journal of Medical Internet Research.