{"title":"Combining Contrastive Learning and Diffusion Model for Hyperspectral Image Classification","authors":"Xiaorun Li;Jinhui Li;Shuhan Chen;Zeyu Cao","doi":"10.1109/LGRS.2025.3601152","DOIUrl":null,"url":null,"abstract":"In recent years, self-supervised learning has made significant strides in hyperspectral image classification (HSIC). However, different approaches come with distinct strengths and limitations. Contrastive learning excels at extracting key information from large volumes of redundant data, but its training objective can inadvertently increase intraclass feature distance. To address this limitation, we leverage diffusion models (DMs) for their proven ability to refine and aggregate features by modeling complex data distributions. Specifically, DMs’ inherent denoising and generative processes are theoretically well-suited to enhance intraclass compactness by learning to reconstruct clean, representative features from perturbed inputs. We propose the new method—ContrastDM. This approach generates synthetic features, improving and enriching feature representation, and partially addressing the issue of sample sparsity. Classification experiments on three publicly available datasets demonstrate that ContrastDM significantly outperforms state-of-the-art methods.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11133435/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, self-supervised learning has made significant strides in hyperspectral image classification (HSIC). However, different approaches come with distinct strengths and limitations. Contrastive learning excels at extracting key information from large volumes of redundant data, but its training objective can inadvertently increase intraclass feature distance. To address this limitation, we leverage diffusion models (DMs) for their proven ability to refine and aggregate features by modeling complex data distributions. Specifically, DMs' inherent denoising and generative processes are well-suited to enhancing intraclass compactness, since they learn to reconstruct clean, representative features from perturbed inputs. We propose a new method, ContrastDM. This approach generates synthetic features, enriching the feature representation and partially alleviating sample sparsity. Classification experiments on three publicly available datasets demonstrate that ContrastDM significantly outperforms state-of-the-art methods.
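The abstract only outlines the two ingredients being combined. The snippet below is a minimal, hypothetical PyTorch sketch (not the authors' ContrastDM implementation) of how a contrastive InfoNCE objective on encoded hyperspectral spectra might be paired with a DDPM-style denoiser operating in feature space, which is the standard way a diffusion model can learn to reconstruct clean features from perturbed ones. All class names (SpectralEncoder, FeatureDenoiser), architectures, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: contrastive encoder + feature-space diffusion denoiser.
# Not the authors' code; a toy 1-D spectral encoder stands in for a real
# spectral-spatial HSIC backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralEncoder(nn.Module):
    """Toy encoder mapping a spectrum (n_bands values) to a normalized feature."""
    def __init__(self, bands: int = 200, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bands, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(z1, z2, tau: float = 0.1):
    """Contrastive loss between two augmented views of the same pixels."""
    logits = z1 @ z2.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

class FeatureDenoiser(nn.Module):
    """Predicts the noise added to a feature vector at diffusion step t."""
    def __init__(self, dim: int = 128, steps: int = 100):
        super().__init__()
        self.t_embed = nn.Embedding(steps, dim)
        self.net = nn.Sequential(nn.Linear(dim * 2, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, z_noisy, t):
        return self.net(torch.cat([z_noisy, self.t_embed(t)], dim=-1))

def diffusion_loss(denoiser, z, alphas_cumprod):
    """Standard DDPM epsilon-prediction objective, applied to features instead of images."""
    t = torch.randint(0, alphas_cumprod.size(0), (z.size(0),), device=z.device)
    a = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(z)
    z_noisy = a.sqrt() * z + (1 - a).sqrt() * noise
    return F.mse_loss(denoiser(z_noisy, t), noise)

# Toy joint training step on random data standing in for hyperspectral pixels.
steps = 100
betas = torch.linspace(1e-4, 0.02, steps)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

enc, den = SpectralEncoder(), FeatureDenoiser(steps=steps)
opt = torch.optim.Adam(list(enc.parameters()) + list(den.parameters()), lr=1e-3)

x = torch.randn(32, 200)                            # batch of spectra
view1 = x + 0.05 * torch.randn_like(x)              # illustrative augmentations
view2 = x + 0.05 * torch.randn_like(x)
z1, z2 = enc(view1), enc(view2)
loss = info_nce(z1, z2) + diffusion_loss(den, z1.detach(), alphas_cumprod)
loss.backward()
opt.step()
```

Under these assumptions, sampling the reverse diffusion process from pure noise would yield synthetic feature vectors that could augment sparse training classes, which is the role the abstract attributes to the generative component.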