SDCINet: A novel cross-task integration network for segmentation and detection of damaged/changed building targets with optical remote sensing imagery

Impact Factor: 10.6 · CAS Tier 1 (Earth Sciences) · JCR Q1 (Geography, Physical)
Haiming Zhang, Guorui Ma, Hongyang Fan, Hongyu Gong, Di Wang, Yongxian Zhang
DOI: 10.1016/j.isprsjprs.2024.09.024
Journal: ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218, Pages 422–446
Published: 2024-09-26 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0924271624003629
Citations: 0

Abstract

Buildings are primary locations for human activity and key targets in the military domain. Rapidly detecting damaged/changed buildings (DCB) and conducting detailed assessments can effectively aid urbanization monitoring, disaster response, and humanitarian assistance. Currently, the tasks of object detection (OD) and change detection (CD) for DCB are almost independent of each other, making it difficult to simultaneously determine both the location and the details of changes. Motivated by this, we have designed a cross-task network called SDCINet, which integrates OD and CD, and have created four dual-task datasets focused on disasters and urbanization. SDCINet is a novel deep learning dual-task framework composed of a consistency encoder, a differentiation decoder, and a cross-task global attention collaboration module (CGAC). It is capable of modeling differential feature relationships from bi-temporal images, performing end-to-end pixel-level prediction and object bounding box regression. The bi-directional traction function of CGAC is used to deeply couple the OD and CD tasks. Additionally, we collected bi-temporal images from 10 locations worldwide that experienced earthquakes, explosions, wars, and conflicts to construct two datasets specifically for damaged-building OD and CD. We also constructed two datasets for changed-building OD and CD based on two publicly available CD datasets. These four datasets can serve as benchmarks for dual-task research on DCB. Using these datasets, we conducted extensive performance evaluations of 18 state-of-the-art models from the perspectives of OD, CD, and instance segmentation. Benchmark experimental results demonstrated the superior performance of SDCINet. Ablation experiments and evaluative analyses confirmed the effectiveness and unique value of CGAC.
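The abstract describes CGAC as a cross-task attention module whose "bi-directional traction" couples the CD and OD branches. The authors' implementation is not given here; the following is a minimal toy sketch (not the paper's code) of the general idea: each task's flattened feature map attends over the other task's positions, so the two branches mutually re-weight each other. All names, shapes, and the use of scaled dot-product attention are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_attention(f_cd, f_od):
    """Bidirectionally re-weight two task feature maps (toy version).

    f_cd, f_od: (N, C) flattened feature maps (N spatial positions,
    C channels). CD queries attend to OD keys/values and vice versa,
    so in a real framework gradients would flow across tasks --
    loosely, the 'traction' idea described in the abstract.
    """
    scale = np.sqrt(f_cd.shape[1])
    cd_from_od = softmax(f_cd @ f_od.T / scale) @ f_od  # CD attends to OD
    od_from_cd = softmax(f_od @ f_cd.T / scale) @ f_cd  # OD attends to CD
    # Residual connections keep each branch's own features.
    return f_cd + cd_from_od, f_od + od_from_cd

# Bi-temporal inputs: a stand-in encoder output for each epoch.
rng = np.random.default_rng(0)
t1 = rng.normal(size=(64, 32))
t2 = rng.normal(size=(64, 32))
f_cd = t2 - t1   # stand-in for differential (change) features
f_od = t2        # stand-in for detection features on the post-event image
f_cd2, f_od2 = cross_task_attention(f_cd, f_od)
print(f_cd2.shape, f_od2.shape)  # (64, 32) (64, 32)
```

In a trained network the attention maps, rather than raw dot products, would decide where one task borrows evidence from the other; this sketch only shows the data flow of a bidirectional coupling between two task branches.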
Source journal: ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology – Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Articles per year: 273
Review time: 40 days
Journal description: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It serves as a platform for scientists and professionals worldwide working in disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal facilitates communication and dissemination of advances in these disciplines while acting as a comprehensive source of reference and archive. P&RS publishes high-quality, peer-reviewed research papers that are preferably original and previously unpublished; these may cover scientific/research, technological development, or application/practical aspects. The journal also welcomes papers based on presentations from ISPRS meetings, provided they constitute significant contributions to the fields above. In particular, P&RS encourages submissions of broad scientific interest, innovative applications (especially in emerging fields), interdisciplinary work, topics that have received limited attention in P&RS or related journals, and new directions in scientific or professional realms. Theoretical papers should preferably include practical applications, while papers on systems and applications should include a theoretical background.