Xiaohui Liu, Lei Zhang, Rui Wang, Xiaoyu Li, Jiyang Xu, Xiaochen Lu
{"title":"基于级联 CNN 和全局-局部注意力变换器网络的高分辨率遥感图像语义分割技术","authors":"Xiaohui Liu, Lei Zhang, Rui Wang, Xiaoyu Li, Jiyang Xu, Xiaochen Lu","doi":"10.1117/1.jrs.18.034502","DOIUrl":null,"url":null,"abstract":"High-resolution remote sensing images (HRRSIs) contain rich local spatial information and long-distance location dependence, which play an important role in semantic segmentation tasks and have received more and more research attention. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, thereby bringing great challenges to a semantic segmentation task. In most networks, there are numerous small-scale object omissions and large-scale object fragmentations in the segmentation results because of insufficient local feature extraction and low global information utilization. A network cascaded by convolution neural network and global–local attention transformer is proposed called CNN-transformer cascade network. First, convolution blocks and global–local attention transformer blocks are used to extract multiscale local features and long-range location information, respectively. Then a multilevel channel attention integration block is designed to fuse geometric features and semantic features of different depths and revise the channel weights through the channel attention module to resist the interference of redundant information. Finally, the smoothness of the segmentation is improved through the implementation of upsampling using a deconvolution operation. We compare our method with several state-of-the-art methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can improve the integrity and independence of multiscale objects segmentation results.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cascaded CNN and global–local attention transformer network-based semantic segmentation for high-resolution remote sensing image\",\"authors\":\"Xiaohui Liu, Lei Zhang, Rui Wang, Xiaoyu Li, Jiyang Xu, Xiaochen Lu\",\"doi\":\"10.1117/1.jrs.18.034502\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"High-resolution remote sensing images (HRRSIs) contain rich local spatial information and long-distance location dependence, which play an important role in semantic segmentation tasks and have received more and more research attention. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, thereby bringing great challenges to a semantic segmentation task. In most networks, there are numerous small-scale object omissions and large-scale object fragmentations in the segmentation results because of insufficient local feature extraction and low global information utilization. A network cascaded by convolution neural network and global–local attention transformer is proposed called CNN-transformer cascade network. First, convolution blocks and global–local attention transformer blocks are used to extract multiscale local features and long-range location information, respectively. 
Then a multilevel channel attention integration block is designed to fuse geometric features and semantic features of different depths and revise the channel weights through the channel attention module to resist the interference of redundant information. Finally, the smoothness of the segmentation is improved through the implementation of upsampling using a deconvolution operation. We compare our method with several state-of-the-art methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can improve the integrity and independence of multiscale objects segmentation results.\",\"PeriodicalId\":54879,\"journal\":{\"name\":\"Journal of Applied Remote Sensing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1117/1.jrs.18.034502\",\"RegionNum\":4,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENVIRONMENTAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1117/1.jrs.18.034502","RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}
Cascaded CNN and global–local attention transformer network-based semantic segmentation for high-resolution remote sensing image
High-resolution remote sensing images (HRRSIs) contain rich local spatial information and long-distance location dependence, which play an important role in semantic segmentation tasks and have received increasing research attention. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, posing great challenges to semantic segmentation. In most networks, the segmentation results suffer from numerous small-scale object omissions and large-scale object fragmentations because of insufficient local feature extraction and low global information utilization. A network that cascades a convolutional neural network (CNN) with a global–local attention transformer, called the CNN-transformer cascade network, is proposed. First, convolution blocks and global–local attention transformer blocks are used to extract multiscale local features and long-range location information, respectively. Then, a multilevel channel attention integration block is designed to fuse geometric and semantic features from different depths and to revise the channel weights through a channel attention module, suppressing the interference of redundant information. Finally, the smoothness of the segmentation is improved by performing upsampling with a deconvolution operation. We compare our method with several state-of-the-art methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can improve the integrity and independence of multiscale object segmentation results.
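As a rough illustration of the pipeline the abstract describes, the following is a minimal PyTorch sketch of a two-level CNN encoder cascaded with a global–local attention block, a channel-attention fusion step, and deconvolution upsampling. It is a simplified sketch under assumed module names (ConvBlock, GlobalLocalAttention, ChannelAttentionFusion, CascadeSegNet) and arbitrary hyperparameters; it is not the authors' CNN-transformer cascade network.

```python
# Minimal, illustrative PyTorch sketch of the kind of cascade the abstract describes:
# convolution blocks for local features, a global-local attention block for long-range
# context, channel-attention fusion of features from different depths, and transposed
# convolution (deconvolution) upsampling. All module names and hyperparameters are
# hypothetical stand-ins, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """Downsample by 2, then apply two 3x3 convolutions to extract local features."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class GlobalLocalAttention(nn.Module):
    """Self-attention over flattened tokens (global branch) plus a depthwise convolution
    (local branch); a simplified stand-in for a global-local attention transformer block."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C) token sequence
        g, _ = self.attn(t, t, t)                    # global long-range dependencies
        g = g.transpose(1, 2).reshape(b, c, h, w)
        return x + g + self.local(x)                 # combine global and local branches


class ChannelAttentionFusion(nn.Module):
    """Fuses a shallow (geometric) and a deep (semantic) feature map, then reweights
    channels with a squeeze-and-excitation-style gate to suppress redundant channels."""
    def __init__(self, ch):
        super().__init__()
        self.proj = nn.Conv2d(2 * ch, ch, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, shallow, deep):
        deep = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear", align_corners=False)
        fused = self.proj(torch.cat([shallow, deep], dim=1))
        return fused * self.gate(fused)              # channel reweighting


class CascadeSegNet(nn.Module):
    """Toy end-to-end cascade: CNN encoder -> global-local attention -> channel-attention
    fusion -> deconvolution upsampling to per-pixel class logits (6 ISPRS classes)."""
    def __init__(self, in_ch=3, base=64, num_classes=6):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)           # 1/2 resolution, shallow features
        self.enc2 = ConvBlock(base, 2 * base)        # 1/4 resolution, deep features
        self.transformer = GlobalLocalAttention(2 * base)
        self.reduce = nn.Conv2d(2 * base, base, 1)
        self.fuse = ChannelAttentionFusion(base)
        self.up = nn.ConvTranspose2d(base, base, kernel_size=2, stride=2)  # deconvolution
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        f1 = self.enc1(x)                                    # geometric detail
        f2 = self.reduce(self.transformer(self.enc2(f1)))    # semantics + long-range context
        return self.head(self.up(self.fuse(f1, f2)))


if __name__ == "__main__":
    logits = CascadeSegNet()(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 6, 128, 128])
```

The smoke test at the end simply checks that the logits come back at half the input resolution after fusion and are restored to full resolution by the transposed convolution; the six output channels mirror the ISPRS Vaihingen/Potsdam label set mentioned in the abstract.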
Journal introduction:
The Journal of Applied Remote Sensing is a peer-reviewed journal that optimizes the communication of concepts, information, and progress within the remote sensing community.