Structural-Aware Dual Generator Generative Adversarial Nets for Medical Image Segmentation

Dongfang Shen, Yijiang Chen, Yu Wu, Wenkang Fan, Xióngbiao Luó
{"title":"Structural-Aware Dual Generator Generative Adversarial Nets for Medical Image Segmentation","authors":"Dongfang Shen, Yijiang Chen, Yu Wu, Wenkang Fan, Xióngbiao Luó","doi":"10.1145/3561613.3561614","DOIUrl":null,"url":null,"abstract":"Unsupervised domain adaptation has attracted a lot of attentions in medical image analysis because it can train models to multimodal domains without data annotation. This work proposes a new end-to-end medical image translation segmentation framework that uses structure-aware dual generator adversarial networks. Specifically, our framework introduces a pair of generators to replace an original single generator, while it also employs two structure-aware mechanisms: (1) image edge or structural information enhancement to improve image translation in the dual generator and (2) an additional loss on the basis of the structural similarity index measure to train constrain the network model. We evaluate the proposed method on medical CT segmentation of our liver data and public abdominal multiorgan data, with the experimental results shows that our proposed segmentation framework certainly outperforms other unsupervised segmentation methods. 
Particularly, the average dice scores of live and multiorgan CT segmentation were improved from (84.7%, 66.2%) to (91.8%, 79.3%) as well as the average symmetric surface distances were reduced from (2.19, 3.8) to (0.90, 2.0).","PeriodicalId":348024,"journal":{"name":"Proceedings of the 5th International Conference on Control and Computer Vision","volume":"253 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Control and Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3561613.3561614","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Unsupervised domain adaptation has attracted considerable attention in medical image analysis because it can adapt models to multimodal domains without data annotation. This work proposes a new end-to-end medical image translation and segmentation framework based on structure-aware dual-generator adversarial networks. Specifically, our framework replaces the original single generator with a pair of generators, and employs two structure-aware mechanisms: (1) image edge or structural information enhancement to improve image translation in the dual generator, and (2) an additional loss based on the structural similarity index measure (SSIM) to constrain the network during training. We evaluate the proposed method on medical CT segmentation using our liver data and public abdominal multiorgan data, and the experimental results show that our proposed segmentation framework clearly outperforms other unsupervised segmentation methods. In particular, the average Dice scores of liver and multiorgan CT segmentation improved from (84.7%, 66.2%) to (91.8%, 79.3%), while the average symmetric surface distances were reduced from (2.19, 3.8) to (0.90, 2.0).
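As a rough illustration of the two quantities the abstract relies on, the sketch below implements a simple global SSIM-based loss (the paper's actual loss likely uses the standard windowed SSIM; the global form, the constants, and the function names here are assumptions for illustration) and the Dice score used for evaluation:

```python
import numpy as np

def ssim_loss(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between two images with intensities in [0, 1];
    the loss is 1 - SSIM, so identical images give a loss of 0.
    (Illustrative simplification: no sliding window, assumed constants.)"""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return 1.0 - ssim

def dice_score(pred, target):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())
```

In a dual-generator setup, a term like `ssim_loss(translated, source)` would be added to the adversarial objective to penalize structural drift during translation, while `dice_score` is the evaluation metric reported above.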