FTransDeepLab: Multimodal Fusion Transformer-Based DeepLabv3+ for Remote Sensing Semantic Segmentation

IF 7.5 | JCR Q1 | Region 1 (Earth Science) | ENGINEERING, ELECTRICAL & ELECTRONIC
Haixia Feng;Qingwu Hu;Pengcheng Zhao;Shunli Wang;Mingyao Ai;Daoyuan Zheng;Tiancheng Liu
DOI: 10.1109/TGRS.2025.3553478
Journal: IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-18
Published: 2025-03-21 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10937095/
Citations: 0

Abstract

High-resolution remote sensing images contain rich color and texture information, but due to the inherent limitations of 2-D data, achieving high-quality semantic segmentation remains a challenge. Multimodal data fusion technology has emerged as an effective approach to overcome this issue. To accurately capture the semantic information in remote sensing images, this study designs a multimodal fusion Transformer-based DeepLabv3+ model for remote sensing semantic segmentation, named FTransDeepLab. Specifically, the network learns features from two modalities and is inspired by the DeepLab architecture. We extended the encoder by stacking the multiscale Segformer, encoding the input images into highly representative spatial features. Additionally, we introduced the multimodal feature rectification (MFR) module and the multimodal feature fusion (MFF) module. The MFR, composed of a channel attention module and a spatial attention module, enhances the model’s ability to capture essential features and improves performance by focusing on both global and local contexts. The MFF module utilizes a cross-attention mechanism to optimize the feature fusion process, which enhances representation learning by facilitating the interaction between diverse information and integrating features from different modalities. Finally, in the decoding path, the extracted high-level features are concatenated with low-level features to optimize the feature representation and upsampled to restore the size of the input image. Extensive results on two datasets, the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam, have confirmed that the proposed FTransDeepLab achieves superior performance compared to state-of-the-art segmentation methods.
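The cross-attention fusion described for the MFF module can be illustrated with a minimal single-head sketch in NumPy. This is not the authors' implementation; the token shapes, the scaled dot-product form, and the residual merge are illustrative assumptions about how one modality (e.g., RGB tokens) might attend to another (e.g., DSM/elevation tokens):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(feat_a, feat_b):
    """Fuse modality-A tokens by attending to modality-B tokens.

    feat_a, feat_b: (N, d) arrays of N token features of dimension d.
    Modality A supplies the queries; modality B supplies keys/values.
    """
    d = feat_a.shape[-1]
    attn = softmax(feat_a @ feat_b.T / np.sqrt(d))  # (N, N) attention weights
    fused = attn @ feat_b                           # A's view of B's features
    return feat_a + fused                           # residual merge (assumed)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))   # e.g. 4 RGB tokens, dim 8
b = rng.standard_normal((4, 8))   # e.g. 4 DSM tokens, dim 8
out = cross_attention_fuse(a, b)
print(out.shape)  # (4, 8)
```

In a full model the queries, keys, and values would pass through learned linear projections and multiple heads; this sketch keeps only the core interaction that lets features from one modality be re-weighted by the other.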
Source journal: IEEE Transactions on Geoscience and Remote Sensing (Engineering Technology - Geochemistry & Geophysics)
CiteScore: 11.50
Self-citation rate: 28.00%
Articles per year: 1912
Review time: 4.0 months
Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.