Mask-Guided and Confidence-Driven Unsupervised Domain Adaptation for Hyperspectral Cross-Scene Classification

Ying Cui; Longyu Zhu; Liguo Wang; Shan Gao; Chunhui Zhao
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5, published 2025-07-16. DOI: 10.1109/LGRS.2025.3589677
Hyperspectral image (HSI) classification holds great potential for practical applications, but its widespread adoption is limited by the high cost of manual annotation. Unsupervised domain adaptation (UDA) offers a solution by transferring knowledge from a labeled source domain (SD) to an unlabeled target domain (TD), but existing methods focus primarily on statistical-level distribution alignment and neglect instance-level variations in the TD data. Moreover, few methods address the noise and redundancy prevalent in HSI by processing the original data at the point level. To overcome these limitations, we propose a mask-guided and confidence-driven UDA (MCUDA) method. MCUDA introduces point-level learnable masks that dynamically optimize the input HSI data cube, effectively suppressing interference and enhancing domain-invariant feature extraction. It also introduces a pseudolabel sample-set generation strategy based on confident learning, which accounts for the instance-level differences and domain-related information of the TD data. Comprehensive experiments on two cross-scene datasets demonstrate that MCUDA outperforms existing UDA methods, achieving superior classification accuracy.
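The two core ideas named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the exact mask parameterization, training objective, and thresholding rule used in MCUDA are not specified here, so the sigmoid-gated mask and the class-conditional confidence threshold below (in the style of confident learning, where a sample is kept if its top predicted probability meets the mean confidence of its predicted class) are illustrative assumptions. Function names such as `apply_point_mask` and `select_confident` are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_point_mask(cube, mask_logits):
    """Point-level soft mask on an HSI cube of shape (H, W, B).

    One learnable logit per spatial-spectral entry; the sigmoid keeps
    each gate in (0, 1), so uninformative points can be suppressed
    while the mask stays differentiable for end-to-end training.
    """
    return cube * sigmoid(mask_logits)

def select_confident(probs):
    """Confident-learning-style pseudolabel selection.

    probs: (N, C) softmax outputs on unlabeled target-domain samples.
    For each class j, the threshold is the mean top-confidence among
    samples predicted as j; a sample is kept only if its confidence
    reaches the threshold of its own predicted class.
    """
    preds = probs.argmax(axis=1)          # pseudolabels
    conf = probs.max(axis=1)              # self-confidence per sample
    n_classes = probs.shape[1]
    thresh = np.array([
        conf[preds == j].mean() if np.any(preds == j) else 1.0
        for j in range(n_classes)
    ])
    keep = conf >= thresh[preds]          # instance-level decision
    return keep, preds

# Toy usage: 4 target-domain samples, 2 classes.
probs = np.array([[0.90, 0.10],
                  [0.60, 0.40],
                  [0.20, 0.80],
                  [0.45, 0.55]])
keep, preds = select_confident(probs)
# Class 0 threshold = mean(0.90, 0.60) = 0.75; class 1 = mean(0.80, 0.55) = 0.675,
# so only the two high-confidence samples survive.
print(keep.tolist())   # [True, False, True, False]
```

Because the threshold is computed per predicted class rather than globally, easy classes do not crowd hard classes out of the pseudolabel set — which is one way to respect the instance-level differences the abstract emphasizes.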