Title: Content-Biased and Style-Assisted Transfer Network for Cross-Scene Hyperspectral Image Classification
Authors: Zuowei Shi; Xudong Lai; Juan Deng; Jinshuo Liu
DOI: 10.1109/TGRS.2024.3458014
Journal: IEEE Transactions on Geoscience and Remote Sensing (JCR Q1, Engineering, Electrical & Electronic; IF 7.5)
Publication date: 2024-09-11
Article page: https://ieeexplore.ieee.org/document/10678753/
Citations: 0
Abstract
Cross-scene hyperspectral image (HSI) classification remains a challenging task due to the distribution discrepancies that arise from variations in imaging sensors, geographic regions, atmospheric conditions, and other factors between the source and target domains. Recent research indicates that convolutional neural networks (CNNs) exhibit a significant tendency to prioritize image styles, which are highly sensitive to domain variations, over the actual content of the images. However, few existing domain adaptation (DA) methods for cross-scene HSI classification take into consideration the style variations both within the samples of an HSI and between the cross-scene source and target domains. Accordingly, we propose a novel content-biased and style-assisted transfer network (CSTnet) for unsupervised DA (UDA) in cross-scene HSI classification. The CSTnet introduces a content and style reorganization (CSR) module that disentangles content features from style features via instance normalization (IN), while refining useful style information as a complementary component to enhance discriminability. A contentwise reorganization loss is designed to reduce the disparity between the separated content/style representations and the output features, thereby enhancing content-level alignment across different domains. Furthermore, we incorporate batch nuclear-norm maximization (BNM) as an effective class-balancing technique that directly exploits unlabeled target data to enhance minority class representations without requiring prior knowledge or pseudolabels, achieving better distribution alignment. Comprehensive experiments on three cross-scene HSI datasets demonstrate that the proposed CSTnet achieves state-of-the-art performance, effectively leveraging content bias and style assistance for robust DA in cross-scene HSI classification tasks. The code is available at:
https://github.com/nbdszw/CSTnet.
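The two core mechanisms the abstract names can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is at the repository above); the function names and the choice to represent "style" as per-channel mean/std statistics are illustrative assumptions. Instance normalization strips each sample's channel-wise statistics, which are commonly treated as style; batch nuclear-norm maximization scores the batch prediction matrix so that maximizing it favors confident, class-diverse outputs on unlabeled target data.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Disentangle features via instance normalization (illustrative).

    x: feature maps of shape (N, C, H, W).
    Returns (content, style): the normalized residual acts as the
    'content' component; the removed per-channel mean/std statistics
    serve as a simple 'style' representation.
    """
    mu = x.mean(axis=(2, 3), keepdims=True)       # per-sample, per-channel mean
    sigma = x.std(axis=(2, 3), keepdims=True)     # per-sample, per-channel std
    content = (x - mu) / (sigma + eps)
    style = np.concatenate([mu, sigma], axis=1)   # (N, 2C, 1, 1) style statistics
    return content, style

def batch_nuclear_norm(probs):
    """Nuclear norm (sum of singular values) of a (batch, classes)
    prediction matrix. Maximizing it jointly encourages prediction
    confidence and class diversity, without pseudolabels."""
    return np.linalg.svd(probs, compute_uv=False).sum()
```

As a sanity check, a batch of one-hot predictions spread over distinct classes (e.g. an identity matrix) attains a higher nuclear norm than a batch of uniform predictions, which is why maximizing this quantity counteracts collapse onto majority classes.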
Journal description:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.