Diaoyin Tan, Yu Liu, Huaxin Xiao, Yang Peng, Maojun Zhang
2022 IEEE 8th International Conference on Computer and Communications (ICCC), 9 December 2022. DOI: 10.1109/ICCC56324.2022.10065791
ICTCAM: Introducing Convolution to Transformer-Based Weakly Supervised Semantic Segmentation
Weakly supervised semantic segmentation (WSSS) is a challenging task that requires only image-level category labels for segmentation prediction. Existing WSSS methods fall into two types, CNN-based and transformer-based, which generate pseudo labels in different ways. The former uses Class Activation Mapping (CAM) to generate pseudo labels, but the activated regions concentrate on the most discriminative parts of objects. The latter uses attention maps from the multi-head self-attention (MHSA) blocks, but suffers from significant background noise and incoherent object regions. To address these problems, we propose ICTCAM, which equips transformer blocks with the inductive capabilities of CNNs through two modules: a deeper stem (DStem) and a convolutional feed-forward network (CFFN). Experimental results show that our modules improve network performance, achieving 69.9% mIoU on the PASCAL VOC 2012 dataset, a new state-of-the-art result among comparable networks.
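The CAM-based pseudo-label generation that the abstract contrasts against can be sketched as follows. This is a minimal illustration of the standard CAM formulation (weighted sum of final feature maps by classifier weights), not code from the paper; the function names and the 0.3 threshold are illustrative choices, and real pipelines normalize and threshold per image before refinement.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a Class Activation Map (CAM) for one class.

    features:   (K, H, W) feature maps from the last convolutional layer
    fc_weights: (C, K) classifier weights mapping K features to C classes
    class_idx:  index of the target class

    Returns an (H, W) map in [0, 1]; high values mark the most
    discriminative regions -- the concentration problem noted above.
    """
    # Weighted sum over the K channels: sum_k w[c, k] * f[k, :, :]
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1]
    return cam

def cam_pseudo_label(features, fc_weights, class_idx, threshold=0.3):
    """Threshold the CAM to obtain a binary foreground pseudo mask."""
    cam = class_activation_map(features, fc_weights, class_idx)
    return (cam >= threshold).astype(np.uint8)
```

Because the CAM is driven by the classification objective, only the few pixels that most strongly support the class score survive the threshold, which is why CAM-based pseudo labels cover object parts rather than whole objects.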