{"title":"AMNet","authors":"Baiyi Shu, Jiong Mu, Yu Zhu","doi":"10.1145/3341069.3342988","DOIUrl":null,"url":null,"abstract":"DeepLabv3+ is one of the most accurate algorithms in semantic segmentation. CBAM is an attention mechanism proposed to improve the performance of obect detection model which can be used in a convolutional neural network. Given an intermediate feature map, CBAM sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. In the image segmentation tasks, in order to achieve the goal of enhancing feature representation and improving segmentation accuracy without extra overheads. In this paper, we proposed AMNet which is an end-to-end semantic segmentation network based on DeepLabv3+ which is embeded with CBAM. Further, CBAM activates when the input image passes through CNN.Channel attention module in CBAM focues on 'what' is meaningful given an input image and spatial attention module focus on 'where'. Our network acheives 77.66% mIoU on the PASCAL VOC2012, which is a 2.73% better mIoU than DeepLabv3+ with 6 batchsize using only one single Nvidia 2080 GPU. Beyond that, for getting a faster segmentation model, we also embed the attention mechanism into ENet, one of the fastest lightweight networks. After our evaluation on the Cityscapes dataset, we got a better performance in the case of fast training speed. The feasibility that attention mechanism can be integrated into semantic segmentaion network is proved.","PeriodicalId":411198,"journal":{"name":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"AMNet\",\"authors\":\"Baiyi Shu, Jiong Mu, Yu Zhu\",\"doi\":\"10.1145/3341069.3342988\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"DeepLabv3+ is one of the most accurate algorithms in semantic segmentation. CBAM is an attention mechanism proposed to improve the performance of obect detection model which can be used in a convolutional neural network. Given an intermediate feature map, CBAM sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. In the image segmentation tasks, in order to achieve the goal of enhancing feature representation and improving segmentation accuracy without extra overheads. In this paper, we proposed AMNet which is an end-to-end semantic segmentation network based on DeepLabv3+ which is embeded with CBAM. Further, CBAM activates when the input image passes through CNN.Channel attention module in CBAM focues on 'what' is meaningful given an input image and spatial attention module focus on 'where'. Our network acheives 77.66% mIoU on the PASCAL VOC2012, which is a 2.73% better mIoU than DeepLabv3+ with 6 batchsize using only one single Nvidia 2080 GPU. Beyond that, for getting a faster segmentation model, we also embed the attention mechanism into ENet, one of the fastest lightweight networks. After our evaluation on the Cityscapes dataset, we got a better performance in the case of fast training speed. 
The feasibility that attention mechanism can be integrated into semantic segmentaion network is proved.\",\"PeriodicalId\":411198,\"journal\":{\"name\":\"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3341069.3342988\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 3rd High Performance Computing and Cluster Technologies Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3341069.3342988","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DeepLabv3+ is one of the most accurate algorithms for semantic segmentation. CBAM is an attention mechanism originally proposed to improve the performance of object detection models, and it can be inserted into any convolutional neural network. Given an intermediate feature map, CBAM sequentially infers attention maps along two separate dimensions, channel and spatial, and multiplies each attention map with the input feature map for adaptive feature refinement. For image segmentation, our goal is to enhance feature representation and improve segmentation accuracy without extra overhead. In this paper, we propose AMNet, an end-to-end semantic segmentation network based on DeepLabv3+ with CBAM embedded; CBAM is applied as the input image passes through the CNN. The channel attention module in CBAM focuses on 'what' is meaningful in an input image, while the spatial attention module focuses on 'where'. Our network achieves 77.66% mIoU on PASCAL VOC 2012, 2.73% higher than DeepLabv3+ with a batch size of 6 on a single Nvidia 2080 GPU. Beyond that, to obtain a faster segmentation model, we also embed the attention mechanism into ENet, one of the fastest lightweight networks; our evaluation on the Cityscapes dataset shows improved performance while retaining fast training speed. These results demonstrate the feasibility of integrating attention mechanisms into semantic segmentation networks.
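To make the channel-then-spatial attention described above concrete, here is a minimal sketch of a CBAM-style block in PyTorch. It is an illustration only, not the authors' released code; the reduction ratio of 16 and the 7x7 spatial convolution follow the original CBAM paper's defaults, which this abstract does not specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Infers a per-channel attention map ('what' to emphasize)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both average-pooled and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)          # shape: (N, C, 1, 1)


class SpatialAttention(nn.Module):
    """Infers a per-location attention map ('where' to emphasize)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # (N, 1, H, W)


class CBAM(nn.Module):
    """Applies channel attention, then spatial attention, multiplying each map into the features."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.channel_att(x)   # refine 'what'
        x = x * self.spatial_att(x)   # refine 'where'
        return x


if __name__ == "__main__":
    feat = torch.randn(2, 256, 33, 33)   # e.g. an intermediate backbone feature map
    refined = CBAM(256)(feat)            # same shape, adaptively re-weighted
    print(refined.shape)                 # torch.Size([2, 256, 33, 33])
```

Because the block preserves the shape of its input, it can in principle be dropped after an intermediate feature map of a backbone such as the one used by DeepLabv3+ or ENet; exactly where AMNet inserts it is described in the full paper, not in this sketch.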