External validation of a deep learning model for automatic segmentation of skeletal muscle and adipose tissue on abdominal computed tomography images.

David P J van Dijk, Leroy F Volmer, Ralph Brecheisen, Bibi Martens, Ross D Dolan, Adam S Bryce, David K Chang, Donald C McMillan, Jan H M B Stoot, Malcolm A West, Sander S Rensen, Andre Dekker, Leonard Wee, Steven W M Olde Damink
{"title":"External validation of a deep learning model for automatic segmentation of skeletal muscle and adipose tissue on abdominal computed tomography images.","authors":"David P J van Dijk,Leroy F Volmer,Ralph Brecheisen,Bibi Martens,Ross D Dolan,Adam S Bryce,David K Chang,Donald C McMillan,Jan H M B Stoot,Malcolm A West,Sander S Rensen,Andre Dekker,Leonard Wee,Steven W M Olde Damink,","doi":"10.1093/bjr/tqae191","DOIUrl":null,"url":null,"abstract":"BACKGROUND\r\nBody composition assessment using computed tomography (CT) images at the L3-level is increasingly applied in cancer research. Robust high-throughput automated segmentation is key to assess large patient cohorts and to support implementation of body composition analysis into routine clinical practice. We trained and externally validated a deep learning neural network (DLNN) to automatically segment L3-CT images.\r\n\r\nMETHODS\r\nExpert-drawn segmentations of visceral and subcutaneous adipose tissue (VAT/SAT) and skeletal muscle (SM) of L3-CT-images of 3,187 patients undergoing abdominal surgery were used to train a DLNN. The external validation cohort was comprised of 2,535 patients with abdominal cancer. DLNN performance was evaluated with (geometric) Dice Similarity (DS) and Lin's Concordance Correlation Coefficient.\r\n\r\nRESULTS\r\nThere was a strong concordance between automatic and manual segmentations with median DS for SM, VAT, and SAT of 0.97 (interquartile range, IQR: 0.95-0.98), 0.98 (IQR: 0.95-0.98), and 0.95 (IQR: 0.92-0.97), respectively. Concordance correlations were excellent: SM 0.964 (0.959-0.968), VAT 0.998 (0.998-0.998), and SAT 0.992 (0.991-0.993). Bland-Altman metrics indicated only small and clinically insignificant systematic offsets; SM radiodensity: 0.23 hounsfield units (0.5%), SM: 1.26 cm2.m-2 (2.8%), VAT: -1.02 cm2.m-2 (1.7%), and SAT: 3.24 cm2.m-2 (4.6%).\r\n\r\nCONCLUSION\r\nA robustly-performing and independently externally validated DLNN for automated body composition analysis was developed.\r\n\r\nADVANCES IN KNOWLEDGE\r\nCT-based body composition analysis is highly prognostic for long-term overall survival in oncology. This DLNN was succesfully trained and externally validated on several large patient cohorts and will therefore enable large scale population studies and implementation of body composition analysis into clinical practice.","PeriodicalId":516851,"journal":{"name":"The British Journal of Radiology","volume":"74 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The British Journal of Radiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/bjr/tqae191","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

BACKGROUND
Body composition assessment using computed tomography (CT) images at the L3 level is increasingly applied in cancer research. Robust, high-throughput automated segmentation is key to assessing large patient cohorts and to supporting the implementation of body composition analysis in routine clinical practice. We trained and externally validated a deep learning neural network (DLNN) to automatically segment L3 CT images.

METHODS
Expert-drawn segmentations of visceral and subcutaneous adipose tissue (VAT/SAT) and skeletal muscle (SM) on L3 CT images of 3,187 patients undergoing abdominal surgery were used to train a DLNN. The external validation cohort comprised 2,535 patients with abdominal cancer. DLNN performance was evaluated with the (geometric) Dice similarity (DS) and Lin's concordance correlation coefficient.

RESULTS
There was strong concordance between automatic and manual segmentations, with median DS for SM, VAT, and SAT of 0.97 (interquartile range, IQR: 0.95-0.98), 0.98 (IQR: 0.95-0.98), and 0.95 (IQR: 0.92-0.97), respectively. Concordance correlations were excellent: SM 0.964 (0.959-0.968), VAT 0.998 (0.998-0.998), and SAT 0.992 (0.991-0.993). Bland-Altman metrics indicated only small, clinically insignificant systematic offsets: SM radiodensity 0.23 Hounsfield units (0.5%), SM 1.26 cm²/m² (2.8%), VAT -1.02 cm²/m² (1.7%), and SAT 3.24 cm²/m² (4.6%).

CONCLUSION
A robustly performing and independently externally validated DLNN for automated body composition analysis was developed.

ADVANCES IN KNOWLEDGE
CT-based body composition analysis is highly prognostic for long-term overall survival in oncology. This DLNN was successfully trained and externally validated on several large patient cohorts and will therefore enable large-scale population studies and the implementation of body composition analysis in clinical practice.
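The agreement metrics used here are standard. As an illustration only (not the authors' implementation), a minimal NumPy sketch of how the Dice similarity, Lin's concordance correlation coefficient, and the Bland-Altman bias could be computed from paired automatic and manual segmentations is shown below; the function names and the 1.96-standard-deviation limits of agreement are assumptions of this sketch.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Geometric Dice similarity between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as perfect agreement.
    return 2.0 * intersection / denom if denom > 0 else 1.0

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements
    (e.g. automatic vs. manual cross-sectional areas)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()  # population variances
    covariance = np.mean((x - mean_x) * (y - mean_y))
    return 2.0 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

def bland_altman_bias(x: np.ndarray, y: np.ndarray) -> tuple[float, float, float]:
    """Bland-Altman systematic offset (bias) and 95% limits of agreement."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread
```

For two L3 masks of the same tissue class, `dice_similarity(auto_mask, manual_mask)` returns a value between 0 and 1, on the same scale as the median DS values reported above; `lins_ccc` and `bland_altman_bias` operate on the derived per-patient area or radiodensity measurements.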