MATNet: Multilevel attention-based transformers for change detection in remote sensing images

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Zhongyu Zhang, Shujun Liu, Yingxiang Qin, Huajun Wang
{"title":"MATNet: Multilevel attention-based transformers for change detection in remote sensing images","authors":"Zhongyu Zhang ,&nbsp;Shujun Liu ,&nbsp;Yingxiang Qin ,&nbsp;Huajun Wang","doi":"10.1016/j.imavis.2024.105294","DOIUrl":null,"url":null,"abstract":"<div><div>Remote sensing image change detection is crucial for natural disaster monitoring and land use change. As the resolution increases, the scenes covered by remote sensing images become more complex, and traditional methods have difficulties in extracting detailed information. With the development of deep learning, the field of change detection has new opportunities. However, existing algorithms mainly focus on the difference analysis between bi-temporal images, while ignoring the semantic information between images, resulting in the global and local information not being able to interact effectively. In this paper, we introduce a new transformer-based multilevel attention network (MATNet), which is capable of extracting multilevel features of global and local information, enabling information interaction and fusion, and thus modeling the global context more effectively. Specifically, we extract multilevel semantic features through the Transformer encoder, and utilize the Feature Enhancement Module (FEM) to perform feature summing and differencing on the multilevel features in order to better extract the local detail information, and thus better detect the changes in small regions. In addition, we employ a multilevel attention decoder (MAD) to obtain information in spatial and spectral dimensions, which can effectively fuse global and local information. In experiments, our method performs excellently on CDD, DSIFN-CD, LEVIR-CD, and SYSU-CD datasets, with F1 scores and OA reaching 95.67%∕87.75%∕90.94%∕86.82% and 98.95%∕95.93%∕99.11%∕90.53% respectively.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"151 ","pages":"Article 105294"},"PeriodicalIF":4.2000,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624003998","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Remote sensing image change detection is crucial for natural disaster monitoring and land-use change analysis. As resolution increases, the scenes covered by remote sensing images become more complex, and traditional methods have difficulty extracting detailed information. With the development of deep learning, the field of change detection has gained new opportunities. However, existing algorithms focus mainly on difference analysis between bi-temporal images while ignoring the semantic information the images share, so global and local information cannot interact effectively. In this paper, we introduce a new transformer-based multilevel attention network (MATNet) that extracts multilevel features of global and local information and enables their interaction and fusion, thereby modeling the global context more effectively. Specifically, we extract multilevel semantic features with a Transformer encoder and use a Feature Enhancement Module (FEM) to perform feature summing and differencing on the multilevel features, which better captures local detail and thus better detects changes in small regions. In addition, we employ a multilevel attention decoder (MAD) to gather information along the spatial and spectral dimensions, effectively fusing global and local information. In experiments, our method performs excellently on the CDD, DSIFN-CD, LEVIR-CD, and SYSU-CD datasets, with F1 scores of 95.67%/87.75%/90.94%/86.82% and OA of 98.95%/95.93%/99.11%/90.53%, respectively.
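The abstract's description of the Feature Enhancement Module suggests an element-wise sum-and-difference fusion of bi-temporal feature maps at each encoder level. The sketch below is a minimal illustrative reconstruction under that assumption, not the authors' implementation; the class name `FeatureEnhancement` and all parameter names are hypothetical.

```python
# Illustrative sketch only: a plausible sum-and-difference fusion of
# bi-temporal feature maps, loosely following the abstract's description
# of the Feature Enhancement Module (FEM). Names are hypothetical; this
# is not the authors' code.
import torch
import torch.nn as nn


class FeatureEnhancement(nn.Module):
    """Fuse bi-temporal features via element-wise sum and absolute difference."""

    def __init__(self, channels: int):
        super().__init__()
        # Project the concatenated sum/difference maps back to `channels`.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor) -> torch.Tensor:
        summed = feat_t1 + feat_t2            # shared (unchanged) context
        diff = torch.abs(feat_t1 - feat_t2)   # change-sensitive response
        return self.fuse(torch.cat([summed, diff], dim=1))


if __name__ == "__main__":
    # Toy usage: one level of bi-temporal encoder features (B, C, H, W).
    f1 = torch.randn(2, 64, 32, 32)
    f2 = torch.randn(2, 64, 32, 32)
    fem = FeatureEnhancement(channels=64)
    print(fem(f1, f2).shape)  # torch.Size([2, 64, 32, 32])
```

In such a design, the summed branch would carry context shared by both acquisition dates while the difference branch highlights changed regions, which is consistent with the abstract's claim that the FEM helps detect changes in small regions.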
Source journal
Image and Vision Computing (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Journal introduction: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.