Deep Learning in Multimodal Remote Sensing Data Fusion: A Comprehensive Review

Jiaxin Li, D. Hong, Lianru Gao, Jing Yao, Ke-xin Zheng, Bing Zhang, J. Chanussot
{"title":"Deep Learning in Multimodal Remote Sensing Data Fusion: A Comprehensive Review","authors":"Jiaxin Li, D. Hong, Lianru Gao, Jing Yao, Ke-xin Zheng, Bing Zhang, J. Chanussot","doi":"10.48550/arXiv.2205.01380","DOIUrl":null,"url":null,"abstract":"With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is readily available nowadays, which renders researchers an opportunity to tackle current geoscience applications in a fresh way. With the joint utilization of EO data, much research on multimodal RS data fusion has made tremendous progress in recent years, yet these developed traditional algorithms inevitably meet the performance bottleneck due to the lack of the ability to comprehensively analyse and interpret these strongly heterogeneous data. Hence, this non-negligible limitation further arouses an intense demand for an alternative tool with powerful processing competence. Deep learning (DL), as a cutting-edge technology, has witnessed remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to the field of multimodal RS data fusion, yielding great improvement compared with traditional methods. This survey aims to present a systematic overview in DL-based multimodal RS data fusion. More specifically, some essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyse the trends of this field. Some prevalent sub-fields in the multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, We collect and summarize some valuable resources for the sake of the development in multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.","PeriodicalId":13664,"journal":{"name":"Int. J. Appl. Earth Obs. Geoinformation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"88","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Appl. Earth Obs. Geoinformation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2205.01380","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 88

Abstract

With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is readily available nowadays, which offers researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years, yet traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyse and interpret such strongly heterogeneous data. This non-negligible limitation creates an intense demand for an alternative tool with more powerful processing capability. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to the field of multimodal RS data fusion, yielding great improvement over traditional methods. This survey presents a systematic overview of DL-based multimodal RS data fusion. More specifically, some essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyse the trends of this field. Several prevalent sub-fields of multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging (LiDAR)-optical, synthetic aperture radar (SAR)-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize valuable resources to support the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.
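To make the DL-based fusion paradigm named in the abstract concrete, the sketch below shows a minimal two-branch network that fuses SAR and optical patch features by concatenation, a common pattern in the SAR-optical sub-field the survey reviews. This is an illustrative sketch only, not a method from the paper; all channel counts, layer widths, and class numbers are hypothetical.

```python
# Minimal illustrative sketch of feature-level SAR-optical fusion.
# Not from the surveyed paper; all sizes below are assumptions.
import torch
import torch.nn as nn


class TwoBranchFusionNet(nn.Module):
    """Encode each modality separately, then fuse features for classification."""

    def __init__(self, sar_channels=2, optical_channels=4, num_classes=10):
        super().__init__()
        # Modality-specific encoders (hypothetical depths/widths)
        self.sar_encoder = nn.Sequential(
            nn.Conv2d(sar_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.opt_encoder = nn.Sequential(
            nn.Conv2d(optical_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Feature-level fusion by concatenation, followed by a classifier head
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, sar, optical):
        f_sar = self.sar_encoder(sar).flatten(1)      # (B, 64)
        f_opt = self.opt_encoder(optical).flatten(1)  # (B, 64)
        fused = torch.cat([f_sar, f_opt], dim=1)      # simple concatenation fusion
        return self.classifier(fused)


if __name__ == "__main__":
    # Random patches stand in for real data: dual-pol SAR and a 4-band optical image
    model = TwoBranchFusionNet()
    sar = torch.randn(8, 2, 32, 32)
    optical = torch.randn(8, 4, 32, 32)
    logits = model(sar, optical)
    print(logits.shape)  # torch.Size([8, 10])
```

Concatenation is only the simplest fusion choice; the surveyed literature also covers deeper coupling of modalities, but the two-branch encoder structure above captures the basic idea.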