Chao Zhang, Xin Lu, Q. Ye, Chao Wang, Chuan-Sheng Yang, Quanqing Wang
2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP), published 2022-04-15. DOI: 10.1109/ICSP54964.2022.9778622
MFENet: Multi-Feature Extraction Net for Remote Sensing Semantic Segmentation
In this paper, we tackle the remote sensing semantic segmentation task by capturing feature information across multiple scales, all channels, and global locations. Unlike previous works that simply use U-Net to extract multi-scale features, we further improve U-Net and propose a Multi-Feature Extraction Network (MFE-Unet). Specifically, we propose the MFE module, which combines a dilated convolution module with two attention modules. Dilated convolution enhances U-Net's ability to represent multi-scale information. The two attention modules are a channel attention module and a pixel attention module. Channel attention aggregates information from all channels, assigns weights, and adaptively adjusts the importance of each channel's information. Pixel attention treats the feature at each location as an individual element, so that similar features are associated with one another to further improve the feature representation. We conducted multiple sets of experiments on the "AI+" remote sensing image dataset, which show that our network performs competitively against several advanced models.
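The two attention mechanisms described above can be illustrated in miniature. The sketch below is not the paper's exact formulation; it is a minimal NumPy rendition under common assumptions: channel attention as a squeeze-and-excitation-style gate (global average pool, small bottleneck, sigmoid reweighting, with hypothetical weight matrices `w1`/`w2`), and pixel attention as a non-local-style softmax over pairwise similarities between spatial locations.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Reweight channels of feat (C, H, W) by a learned per-channel gate.

    w1, w2 are hypothetical bottleneck weights of shapes (R, C) and (C, R).
    """
    pooled = feat.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ pooled)            # ReLU bottleneck -> (R,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1) -> (C,)
    return feat * gate[:, None, None]                # broadcast gate over H, W

def pixel_attention(feat):
    """Associate similar spatial locations of feat (C, H, W) via softmax attention."""
    C, H, W = feat.shape
    x = feat.reshape(C, -1).T                        # locations as rows -> (N, C), N = H*W
    sim = x @ x.T                                    # pairwise similarity -> (N, N)
    sim = sim - sim.max(axis=1, keepdims=True)       # subtract row max for numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    out = attn @ x                                   # each location aggregates similar features
    return out.T.reshape(C, H, W)
```

In a full network these blocks would sit inside the MFE module alongside dilated convolutions; here they only show the shape-preserving reweighting (channel) and location-mixing (pixel) behaviors.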