Structural-Aware Dual Generator Generative Adversarial Nets for Medical Image Segmentation

Dongfang Shen, Yijiang Chen, Yu Wu, Wenkang Fan, Xióngbiao Luó
Proceedings of the 5th International Conference on Control and Computer Vision, 2022-08-19
DOI: 10.1145/3561613.3561614
Unsupervised domain adaptation has attracted much attention in medical image analysis because it can adapt trained models to multimodal domains without data annotation. This work proposes a new end-to-end medical image translation and segmentation framework that uses structure-aware dual-generator adversarial networks. Specifically, our framework introduces a pair of generators to replace the original single generator, and it employs two structure-aware mechanisms: (1) image edge or structural information enhancement to improve image translation in the dual generator, and (2) an additional loss based on the structural similarity index measure (SSIM) to constrain training of the network model. We evaluate the proposed method on medical CT segmentation using our own liver data and public abdominal multi-organ data; the experimental results show that the proposed segmentation framework clearly outperforms other unsupervised segmentation methods. In particular, the average Dice scores of liver and multi-organ CT segmentation improved from (84.7%, 66.2%) to (91.8%, 79.3%), and the average symmetric surface distances were reduced from (2.19, 3.8) to (0.90, 2.0).
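The abstract's second structure-aware mechanism is a loss term derived from the structural similarity index measure. The paper does not give its exact formulation, but a minimal sketch of the standard global SSIM and the corresponding 1 − SSIM loss (using plain NumPy; the function names and the simplified single-window form are assumptions, not the authors' implementation) could look like this:

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM between two images with intensities in [0, 1].

    c1 and c2 are the usual small stabilizing constants; the full SSIM
    is computed over local sliding windows, which this sketch omits.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def ssim_loss(x, y):
    """Structural loss term: 0 for identical images, larger when
    structure diverges. A term like this would be added to the
    adversarial objective to constrain the translation network."""
    return 1.0 - ssim(x, y)
```

Because SSIM compares local means, variances, and covariance rather than raw pixel differences, a loss of this form penalizes structural distortion even when intensity distributions shift between modalities, which matches the stated goal of preserving anatomy during translation.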