Jiahui Qu, Jingyu Zhao, Wenqian Dong, Jie He, Zan Li, Yunsong Li
Title: CDAFormer: Hybrid Transformer-based contrastive domain adaptation framework for unsupervised hyperspectral change detection
DOI: 10.1016/j.neunet.2025.107633
Journal: Neural Networks, Volume 190, Article 107633
Publication date: 2025-05-31
Citations: 0
Abstract
Hyperspectral image (HSI) change detection is a technique used to identify changes between HSIs captured of the same scene at different times. Most existing deep learning-based methods achieve strong results, but they generalize poorly to other HSIs with different data distributions. Moreover, obtaining annotated datasets for model training is expensive and laborious. To address these issues, we propose a hybrid Transformer-based contrastive domain adaptation (CDAFormer) framework for unsupervised hyperspectral change detection, which can effectively use prior information to improve detection performance in the absence of labeled training samples by separately aligning the changed and unchanged difference features of the two domains. Concretely, the difference features of the two domains are fed into the hybrid Transformer block for preliminary coarse contrastive domain alignment. Then, the positive and negative feature pairs generated by the hybrid Transformer block are used for fine alignment at the loss-function level. In particular, the domain discrepancy can be bridged by pulling category-consistent difference feature representations closer and pushing category-inconsistent difference feature representations apart, so as to maintain the separability of domain-invariant difference features. The acquired domain-invariant discriminative features are subsequently fed into fully connected layers to derive the detection results. Extensive experiments on widely used datasets show that the proposed method achieves superior performance compared with other state-of-the-art methods.
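The fine-alignment step described above amounts to a category-aware contrastive objective over cross-domain difference features. The following is a minimal illustrative sketch (not the authors' implementation); the function name, margin value, and use of pseudo-labels for the target domain are assumptions for the example:

```python
import numpy as np

def contrastive_alignment_loss(src_feats, tgt_feats, src_labels, tgt_labels, margin=1.0):
    """Illustrative category-aware contrastive loss over cross-domain pairs.

    Category-consistent pairs (same changed/unchanged label) are pulled
    together by minimizing their squared distance; category-inconsistent
    pairs are pushed at least `margin` apart via a hinge penalty.
    Target labels would in practice be pseudo-labels (an assumption here).
    """
    loss, n_pairs = 0.0, 0
    for f_s, y_s in zip(src_feats, src_labels):
        for f_t, y_t in zip(tgt_feats, tgt_labels):
            d = np.linalg.norm(f_s - f_t)
            if y_s == y_t:
                # category-consistent: pull the pair closer
                loss += d ** 2
            else:
                # category-inconsistent: push apart up to the margin
                loss += max(0.0, margin - d) ** 2
            n_pairs += 1
    return loss / n_pairs
```

When difference features of the same category already coincide across domains and features of different categories are separated by more than the margin, the loss is zero, which is the alignment state the coarse Transformer stage is meant to approach.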
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.