{"title":"SpectralDiff:一个基于扩散模型的高光谱图像分类生成框架","authors":"Ning Chen;Jun Yue;Leyuan Fang;Shaobo Xia","doi":"10.1109/TGRS.2023.3310023","DOIUrl":null,"url":null,"abstract":"Hyperspectral image (HSI) classification is an important issue in remote sensing field with extensive applications in Earth science. In recent years, a large number of deep learning-based HSI classification methods have been proposed. However, the existing methods have limited ability to handle high-dimensional, highly redundant, and complex data, making it challenging to capture the spectral–spatial distributions of data and relationships between samples. To address this issue, we propose a generative framework for HSI classification with diffusion models (SpectralDiff) that effectively mines the distribution information of high-dimensional and highly redundant data by iteratively denoising and explicitly constructing the data generation process, thus better reflecting the relationships between samples. The framework consists of a spectral–spatial diffusion module and an attention-based classification module. The spectral–spatial diffusion module adopts forward and reverse spectral–spatial diffusion processes to achieve adaptive construction of sample relationships without requiring prior knowledge of graphical structure or neighborhood information. It captures spectral–spatial distribution and contextual information of objects in HSI and mines unsupervised spectral–spatial diffusion features within the reverse diffusion process. Finally, these features are fed into the attention-based classification module for per-pixel classification. The diffusion features can facilitate cross-sample perception via reconstruction distribution, leading to improved classification performance. Experiments on three public HSI datasets demonstrate that the proposed method can achieve better performance than state-of-the-art methods. For the sake of reproducibility, the source code of SpectralDiff will be publicly available at \n<uri>https://github.com/chenning0115/SpectralDiff</uri>\n.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"61 ","pages":"1-16"},"PeriodicalIF":8.6000,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SpectralDiff: A Generative Framework for Hyperspectral Image Classification With Diffusion Models\",\"authors\":\"Ning Chen;Jun Yue;Leyuan Fang;Shaobo Xia\",\"doi\":\"10.1109/TGRS.2023.3310023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hyperspectral image (HSI) classification is an important issue in remote sensing field with extensive applications in Earth science. In recent years, a large number of deep learning-based HSI classification methods have been proposed. However, the existing methods have limited ability to handle high-dimensional, highly redundant, and complex data, making it challenging to capture the spectral–spatial distributions of data and relationships between samples. To address this issue, we propose a generative framework for HSI classification with diffusion models (SpectralDiff) that effectively mines the distribution information of high-dimensional and highly redundant data by iteratively denoising and explicitly constructing the data generation process, thus better reflecting the relationships between samples. The framework consists of a spectral–spatial diffusion module and an attention-based classification module. 
The spectral–spatial diffusion module adopts forward and reverse spectral–spatial diffusion processes to achieve adaptive construction of sample relationships without requiring prior knowledge of graphical structure or neighborhood information. It captures spectral–spatial distribution and contextual information of objects in HSI and mines unsupervised spectral–spatial diffusion features within the reverse diffusion process. Finally, these features are fed into the attention-based classification module for per-pixel classification. The diffusion features can facilitate cross-sample perception via reconstruction distribution, leading to improved classification performance. Experiments on three public HSI datasets demonstrate that the proposed method can achieve better performance than state-of-the-art methods. For the sake of reproducibility, the source code of SpectralDiff will be publicly available at \\n<uri>https://github.com/chenning0115/SpectralDiff</uri>\\n.\",\"PeriodicalId\":13213,\"journal\":{\"name\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"volume\":\"61 \",\"pages\":\"1-16\"},\"PeriodicalIF\":8.6000,\"publicationDate\":\"2023-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Geoscience and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10234379/\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10234379/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
SpectralDiff: A Generative Framework for Hyperspectral Image Classification With Diffusion Models
Hyperspectral image (HSI) classification is an important problem in the remote sensing field, with extensive applications in Earth science. In recent years, a large number of deep learning-based HSI classification methods have been proposed. However, existing methods have limited ability to handle high-dimensional, highly redundant, and complex data, making it challenging to capture the spectral–spatial distributions of the data and the relationships between samples. To address this issue, we propose a generative framework for HSI classification with diffusion models (SpectralDiff) that effectively mines the distribution information of high-dimensional and highly redundant data by iteratively denoising and explicitly constructing the data generation process, thus better reflecting the relationships between samples. The framework consists of a spectral–spatial diffusion module and an attention-based classification module. The spectral–spatial diffusion module adopts forward and reverse spectral–spatial diffusion processes to achieve adaptive construction of sample relationships without requiring prior knowledge of graph structure or neighborhood information. It captures the spectral–spatial distribution and contextual information of objects in the HSI and mines unsupervised spectral–spatial diffusion features within the reverse diffusion process. Finally, these features are fed into the attention-based classification module for per-pixel classification. The diffusion features facilitate cross-sample perception via the reconstruction distribution, leading to improved classification performance. Experiments on three public HSI datasets demonstrate that the proposed method achieves better performance than state-of-the-art methods. For the sake of reproducibility, the source code of SpectralDiff will be publicly available at https://github.com/chenning0115/SpectralDiff.
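To make the idea described in the abstract concrete, the sketch below illustrates, in PyTorch, a DDPM-style forward diffusion over spectral–spatial HSI patches and a small denoising network whose intermediate activations serve as unsupervised "diffusion features" for a downstream per-pixel classifier. All module names, network sizes, and hyperparameters here are illustrative assumptions and do not reproduce the authors' implementation; consult the linked repository for the actual SpectralDiff code.

```python
# Minimal sketch (assumptions, not the authors' code): forward spectral-spatial
# diffusion q(x_t | x_0) on HSI patches, a tiny denoiser trained to predict the
# added noise, and extraction of intermediate features for later classification.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_noise_schedule(num_steps: int = 1000):
    """Linear beta schedule and cumulative alpha products, as in standard DDPMs."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    return betas, alpha_bars


def forward_diffuse(x0: torch.Tensor, t: torch.Tensor, alpha_bars: torch.Tensor):
    """Sample x_t from q(x_t | x_0) by mixing the clean patch with Gaussian noise."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)            # broadcast over (B, C, H, W)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return xt, noise


class TinyDenoiser(nn.Module):
    """A small convolutional denoiser standing in for the spectral-spatial network."""

    def __init__(self, bands: int, hidden: int = 64):
        super().__init__()
        self.inp = nn.Conv2d(bands, hidden, 3, padding=1)
        self.mid = nn.Conv2d(hidden, hidden, 3, padding=1)
        self.out = nn.Conv2d(hidden, bands, 3, padding=1)
        self.t_embed = nn.Linear(1, hidden)             # scalar timestep embedding

    def forward(self, xt, t, return_features=False):
        temb = self.t_embed(t.float().view(-1, 1)).unsqueeze(-1).unsqueeze(-1)
        h = F.relu(self.inp(xt) + temb)
        feats = F.relu(self.mid(h))                     # "diffusion features"
        eps_pred = self.out(feats)
        return (eps_pred, feats) if return_features else eps_pred


if __name__ == "__main__":
    bands, patch = 30, 9                                # e.g. reduced bands, 9x9 patches
    betas, alpha_bars = make_noise_schedule()
    model = TinyDenoiser(bands)

    x0 = torch.randn(8, bands, patch, patch)            # a batch of HSI patches
    t = torch.randint(0, len(betas), (8,))
    xt, noise = forward_diffuse(x0, t, alpha_bars)

    # Unsupervised objective: predict the noise added by the forward process.
    eps_pred, feats = model(xt, t, return_features=True)
    loss = F.mse_loss(eps_pred, noise)
    loss.backward()

    # In the full framework, features like `feats` would be collected during the
    # reverse diffusion process and fed to an attention-based module that assigns
    # a class label to each pixel.
    print(loss.item(), feats.shape)
```

In this reading, the diffusion network is trained without labels, and classification is decoupled: the classifier only consumes the learned features, which is why the abstract emphasizes that sample relationships are constructed adaptively rather than from a predefined graph or neighborhood.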
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.