Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images

Impact Factor: 10.6 · CAS Tier 1 (Earth Science) · JCR Q1 (Geography, Physical)
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
DOI: 10.1016/j.isprsjprs.2024.09.002
Published: 2024-09-10
Full text: https://www.sciencedirect.com/science/article/pii/S092427162400340X
Citations: 0

Abstract

Change detection, a crucial task in Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured over the same geographical area. In practical applications, however, pseudo changes arising from diverse imaging conditions and different remote-sensing platforms pose a major challenge. Existing methods either overlook the differing imaging styles of the bi-temporal images or transfer styles via domain adaptation, which may lose ground details. To address these problems, we introduce disentangled representation learning, which mitigates differences in imaging style while preserving content details, and develop a change detection framework named the Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. Separating the style space mitigates the style discrepancies caused by different imaging conditions, while the extracted content space captures the semantic features that are essential for change detection. The content-space encoder is built on a multi-resolution parallel structure, facilitating robust extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The results show that our method robustly handles both temporal and platform variations, making it a promising approach for change detection in complex conditions and for supporting downstream applications.
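The abstract describes the architecture only at a conceptual level. Below is a minimal, hypothetical PyTorch sketch of the content/style disentanglement idea it outlines: a shared content encoder, a private style encoder, a lightweight reconstruction decoder, and a change head that operates on the cleansed content features. All class names, channel widths, and layer choices are illustrative assumptions, not the authors' actual CCNet implementation (which, per the abstract, uses a multi-resolution parallel content encoder).

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Embed an image into a shared content space and a private style space."""
    def __init__(self, in_ch=3, content_ch=64, style_dim=8):
        super().__init__()
        # Shared content encoder: keeps spatial resolution for dense change maps.
        self.content = nn.Sequential(
            nn.Conv2d(in_ch, content_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(content_ch, content_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Private style encoder: global pooling collapses spatial detail,
        # leaving only image-level style (illumination, sensor response, ...).
        self.style = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, style_dim),
        )

    def forward(self, x):
        return self.content(x), self.style(x)


class LightweightDecoder(nn.Module):
    """Reconstruct the image from content + style, encouraging the two
    subspaces to remain independent and interpretable."""
    def __init__(self, content_ch=64, style_dim=8, out_ch=3):
        super().__init__()
        self.fuse = nn.Conv2d(content_ch + style_dim, content_ch, 1)
        self.out = nn.Conv2d(content_ch, out_ch, 3, padding=1)

    def forward(self, content, style):
        b, _, h, w = content.shape
        # Broadcast the style vector over the spatial grid before fusing.
        style_map = style.view(b, -1, 1, 1).expand(b, style.size(1), h, w)
        fused = torch.relu(self.fuse(torch.cat([content, style_map], dim=1)))
        return self.out(fused)


class ChangeHead(nn.Module):
    """Predict a change map from the style-cleansed bi-temporal content features."""
    def __init__(self, content_ch=64):
        super().__init__()
        self.cls = nn.Conv2d(content_ch * 2, 1, 1)

    def forward(self, c1, c2):
        return torch.sigmoid(self.cls(torch.cat([c1, c2], dim=1)))


if __name__ == "__main__":
    enc, dec, head = DisentangledEncoder(), LightweightDecoder(), ChangeHead()
    t1, t2 = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
    c1, s1 = enc(t1)
    c2, s2 = enc(t2)
    change_map = head(c1, c2)   # change detection uses content features only
    recon1 = dec(c1, s1)        # reconstruction supervises the disentanglement
    print(change_map.shape)     # torch.Size([1, 1, 256, 256])
    print(recon1.shape)         # torch.Size([1, 3, 256, 256])
```

In a training loop one would typically combine a reconstruction loss on the decoder output with a supervised change-detection loss on the change map, so that style information is pushed into the style vector and kept out of the content features; the exact losses and constraints used by the authors are not specified in the abstract.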

Journal
ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology: Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Articles published per year: 273
Review time: 40 days
Journal description: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.