Development of multiscale 3D residual U-Net to segment edematous adipose tissue by leveraging annotations from non-edematous adipose tissue

Jianfei Liu, O. Shafaat, R. Summers

Symposium on Medical Information Processing and Analysis, published 2023-03-06. DOI: 10.1117/12.2669719

Citations: 0
Abstract
Data annotation is often a prerequisite for applying deep learning to medical image segmentation. It is a tedious process that requires substantial guidance from experienced physicians. Adipose tissue labeling on CT scans is particularly time-consuming because adipose tissue is present throughout the entire body. One possible solution is to create inaccurate annotations with conventional (non-deep learning) adipose tissue segmentation methods. This work demonstrates the development of a deep learning model trained directly on these inaccurate annotations. The model is a multi-scale 3D residual U-Net in which the encoder path is composed of residual blocks and the decoder path fuses multi-scale feature maps from different decoder layers. The training set consisted of 101 patients and the testing set of 14 patients. Ten patients with anasarca were purposely added to the testing dataset as a stress test to evaluate model generalizability. Anasarca is a medical condition that leads to the generalized accumulation of edema within subcutaneous adipose tissue. Edema creates heterogeneity inside the adipose tissue that is absent from the training data. In comparison with a baseline method, the Dice coefficient measured against manual annotations improved significantly from 73.4 ± 14.1% to 80.2 ± 7.1% (p < 0.05). The model trained on inaccurate annotations improved the accuracy of adipose tissue segmentation by 7% without the need for any manual annotation.
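The abstract describes a multi-scale 3D residual U-Net: residual blocks in the encoder, and a decoder that fuses feature maps from different decoder depths before the segmentation head. The paper does not give the exact layer configuration, so the following is only a minimal two-level PyTorch sketch of that general idea; all layer sizes, the `MultiScaleResUNet3D` and `ResidualBlock` names, and the use of instance normalization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """3D residual block: two 3x3x3 convolutions with a shortcut connection.
    A 1x1x1 convolution projects the shortcut when channel counts differ."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1)
        self.norm1 = nn.InstanceNorm3d(out_ch)
        self.norm2 = nn.InstanceNorm3d(out_ch)
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        h = F.relu(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return F.relu(h + self.skip(x))


class MultiScaleResUNet3D(nn.Module):
    """Toy two-level 3D residual U-Net. The head consumes a fusion of the
    full-resolution decoder features and the upsampled deep features,
    illustrating multi-scale feature fusion in the decoder path."""

    def __init__(self, in_ch: int = 1, base: int = 8, n_classes: int = 2):
        super().__init__()
        self.enc1 = ResidualBlock(in_ch, base)
        self.enc2 = ResidualBlock(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec1 = ResidualBlock(base * 2 + base, base)
        # segmentation head over the fused multi-scale features
        self.head = nn.Conv3d(base + base * 2, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)               # full-resolution features
        e2 = self.enc2(self.pool(e1))   # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        # multi-scale fusion: concatenate shallow decoder output with
        # upsampled deep features before the final classification layer
        fused = torch.cat([d1, self.up(e2)], dim=1)
        return self.head(fused)
```

A real model for whole-body CT would use more resolution levels and wider channels; the two-level version only keeps the sketch small enough to trace the tensor shapes by hand.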
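The reported results are Dice coefficients between predicted and manually annotated masks. For reference, a minimal NumPy implementation of the Dice similarity coefficient for binary segmentation masks (the convention of returning 1.0 when both masks are empty is an assumption):

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / total
```

Applied voxel-wise to 3D CT masks, this is the metric behind the reported improvement from 73.4% to 80.2%.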