MAU-Net: A Multiscale Attention Encoder-decoder Network for Liver and Liver-tumor Segmentation
Le Liu, Jian Su, HuLin Liu, Weiqiang Zhao, Xiaogang Du, Tao Lei
DOI: 10.1145/3512388.3512418 (https://doi.org/10.1145/3512388.3512418)
Published in: Proceedings of the 2022 5th International Conference on Image and Graphics Processing
Publication date: 2022-01-07
Citations: 0
Abstract
U-Net and its improved variants suffer from two problems in liver and liver-tumor segmentation. First, the skip connections in encoder-decoder networks introduce interfering information. Second, convolutional kernels with a fixed receptive field do not match liver tumors, whose shape and position vary. To address these problems, we propose a multiscale attention encoder-decoder network (MAU-Net) for liver and liver-tumor segmentation. First, MAU-Net employs a self-attention gating guidance module in the skip connections to suppress irrelevant regions. Second, MAU-Net employs a multi-branch feature fusion module to extract multiscale features for liver-tumor segmentation. We evaluate the proposed method on the public LiTS dataset. The experimental results show that the average Dice scores of liver and liver-tumor segmentation by MAU-Net are 96.11% and 86.90%, respectively. Experiments demonstrate that MAU-Net is superior to state-of-the-art networks for liver and liver-tumor segmentation.
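The abstract describes two architectural ideas: an attention gate on the skip connections that suppresses irrelevant regions, and a multi-branch block that fuses features at several receptive-field sizes. The PyTorch sketch below illustrates both ideas in a generic form, plus the Dice coefficient used as the evaluation metric; the module names, channel layouts, and dilation rates are illustrative assumptions, not the authors' exact MAU-Net design.

```python
# Hypothetical sketch of the two mechanisms described in the abstract,
# not the authors' exact MAU-Net implementation.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Gates encoder skip features with a decoder (gating) signal so that
    irrelevant regions are suppressed before concatenation."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # gate is assumed to be upsampled to the spatial size of skip
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # element-wise suppression of irrelevant regions


class MultiBranchFusion(nn.Module):
    """Parallel dilated-convolution branches capture features at several
    receptive-field sizes; their outputs are fused by a 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([torch.relu(b(x)) for b in self.branches], dim=1))


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice coefficient for binary masks, the metric reported in the abstract."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In an encoder-decoder network of this kind, the gated skip features would be concatenated with the upsampled decoder features at each resolution, and the multi-branch block would replace a plain convolution where tumor size and position vary most.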