OSHFNet: A heterogeneous dual-branch dynamic fusion network of optical and SAR images for land use classification

IF 8.6 Q1 REMOTE SENSING
Chenfang Liu, Yuli Sun, Xianghui Zhang, Yanjie Xu, Lin Lei, Gangyao Kuang
DOI: 10.1016/j.jag.2025.104609
Journal: International journal of applied earth observation and geoinformation : ITC journal, Volume 141, Article 104609
Published: 2025-06-04
Citations: 0

Abstract

Optical and synthetic aperture radar (SAR) images are two of the most widely used remote sensing data sources, providing complementary but structurally consistent information. This complementarity has inspired significant research on their fusion. However, optical and SAR data differ greatly in image representation, so extracting features from both with an identical network structure expresses their information inaccurately and degrades classification performance. Therefore, in the feature extraction stage, we analyze the respective strengths of optical and SAR images and propose a heterogeneous dual-branch network framework. The framework exploits the rich local features of optical images and the global structural features of SAR images by using a CNN and VMamba as their respective feature extractors. This heterogeneous extraction strategy effectively captures the complementary characteristics of the two modalities and provides a solid foundation for subsequent fusion. Second, in the feature fusion stage, we introduce a global-local dynamic gating fusion module: multi-scale feature extraction and a self-attention mechanism ensure comprehensive feature capture, while the dynamic gating mechanism strengthens the integration of cross-modal complementary information. Finally, our method achieves excellent performance on medium- and high-resolution datasets, showing robustness and adaptability across resolutions. Notably, it significantly improves overall classification accuracy while maintaining per-category accuracy; for challenging categories such as roads, it achieves improvements of about 15%.
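As a rough illustration (not the authors' implementation), the core idea of dynamic gating fusion — computing an input-dependent gate that weights each modality's features per channel — can be sketched as follows. All names and the random weights here are illustrative stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_opt, f_sar, w, b):
    """Fuse optical and SAR feature vectors with a dynamic gate.

    The gate g is computed from the concatenated features, so the
    mixing ratio adapts to each input rather than being fixed.
    """
    g = sigmoid(np.concatenate([f_opt, f_sar]) @ w + b)  # gate in (0, 1), one value per channel
    return g * f_opt + (1.0 - g) * f_sar                 # element-wise convex combination

rng = np.random.default_rng(0)
d = 8                                    # feature dimension (illustrative)
f_opt = rng.standard_normal(d)           # local features from the optical (CNN) branch
f_sar = rng.standard_normal(d)           # global features from the SAR (VMamba) branch
w = rng.standard_normal((2 * d, d))      # stand-in for learned gate weights
b = np.zeros(d)                          # stand-in for learned gate bias

fused = gated_fusion(f_opt, f_sar, w, b)
print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1), each fused channel is a convex combination of the corresponding optical and SAR channels; in the actual network the gate parameters are learned end-to-end alongside the two branches.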
Source journal
International journal of applied earth observation and geoinformation : ITC journal
Subject areas: Global and Planetary Change; Management, Monitoring, Policy and Law; Earth-Surface Processes; Computers in Earth Sciences
CiteScore: 12.00
Self-citation rate: 0.00%
Review time: 77 days
Journal description: The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.