Model Complexity of Deep Residual U-NET for CT Liver Volumetry

Kyoung-jin Park, Sang-hyub Park
{"title":"Model Complexity of Deep Residual U-NET for CT Liver Volumetry","authors":"Kyoung-jin Park, Sang-hyub Park","doi":"10.31320/jksct.2022.24.2.55","DOIUrl":null,"url":null,"abstract":"Computed Tomography (CT) has been used for liver volume measurement because of the highest location accuracy. Automated segmentation methods may improve CT volumetry time, but it has low accuracy. Residual U-Net which is one of the deep learning methods could improve segmentation accuracy. However optimization of residual U-Net hasn’t been demonstrated yet. The purpose of this paper is to investigate the optimal complexity for CT liver volumetry. The study was conducted using the 3D-IRCADb01 Datasets (10 males, 10 females) published by MIS Training Center, 15 people learned and 5 people tested. Segmented images were generated using Deep Residual U-Nets with a total of four different complexity. As a result, as the model became more complex, the total parameters and training time increased exponentially. In all models, both training and testing showed more than 97% accuracy. All losses were less than 0.2. In the case of DCL, it was the lowest at 0.8037 in 3-layer and the highest at 0.9533 in 5-layer. In conclusion, 5 hidden layers of residual U-Net has the highest dice coefficient loss and could train the datasets faster than other complex models.","PeriodicalId":272693,"journal":{"name":"Korean Society of Computed Tomographic Technology","volume":"140 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Korean Society of Computed Tomographic Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31320/jksct.2022.24.2.55","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Computed Tomography (CT) has been used for liver volume measurement because it offers the highest localization accuracy. Automated segmentation methods can shorten CT volumetry time, but their accuracy is low. Residual U-Net, a deep learning method, can improve segmentation accuracy; however, the optimal configuration of a residual U-Net has not yet been demonstrated. The purpose of this paper is to investigate the optimal model complexity for CT liver volumetry. The study was conducted using the 3D-IRCADb01 dataset (10 males, 10 females) published by the MIS Training Center, with 15 subjects used for training and 5 for testing. Segmented images were generated using deep residual U-Nets of four different complexities. As the models became more complex, the total number of parameters and the training time increased exponentially. All models showed more than 97% accuracy in both training and testing, and all losses were below 0.2. The dice coefficient loss (DCL) was lowest at 0.8037 for the 3-layer model and highest at 0.9533 for the 5-layer model. In conclusion, the residual U-Net with 5 hidden layers has the highest dice coefficient loss and can be trained on the dataset faster than more complex models.
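The abstract reports segmentation quality with the dice coefficient loss (DCL). As a point of reference, the sketch below shows a minimal NumPy implementation of the Dice coefficient and the corresponding loss for binary masks; it is illustrative only and is not the authors' code, and the function names `dice_coefficient` and `dice_loss` are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of the Dice coefficient
# and Dice loss for binary segmentation masks, using NumPy.
import numpy as np

def dice_coefficient(pred, target, smooth=1e-6):
    """Dice coefficient between a binary prediction mask and a ground-truth mask.

    Both inputs are 0/1 arrays of the same shape; `smooth` avoids division
    by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

def dice_loss(pred, target):
    """Dice loss = 1 - Dice coefficient; 0 corresponds to a perfect overlap."""
    return 1.0 - dice_coefficient(pred, target)

if __name__ == "__main__":
    # Toy example: a predicted liver mask that misses one ground-truth voxel.
    truth = np.zeros((4, 4), dtype=np.uint8)
    truth[1:3, 1:3] = 1            # 4 ground-truth voxels
    prediction = truth.copy()
    prediction[1, 1] = 0           # one missed voxel
    print(dice_coefficient(prediction, truth))  # ~0.857
    print(dice_loss(prediction, truth))         # ~0.143
```

With this convention a lower loss means better overlap; how the paper's reported DCL values relate to this definition is not specified in the abstract.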