Semi-supervised Classification of Hyperspectral Image through Deep Encoder-Decoder and Graph Neural Networks

Refka Hanachi, A. Sellami, I. Farah, M. Mura
{"title":"Semi-supervised Classification of Hyperspectral Image through Deep Encoder-Decoder and Graph Neural Networks","authors":"Refka Hanachi, A. Sellami, I. Farah, M. Mura","doi":"10.1109/ICOTEN52080.2021.9493562","DOIUrl":null,"url":null,"abstract":"The hyperspectral image (HSI) classification is a challenging task due to the high dimensional spectral feature space, and a low number of labeled training samples. To overcome these issues, we propose a novel methodology for HSI classification, called DAE-GCN, which is based on deep neural networks. The main goal is to preserve both spectral and spatial features in the classification task by using only a few number of labeled training samples. Firstly, we propose a deep autoencoder (DAE) model, which learns to extract relevant features from the HSI. It seeks to find a better representation of the HSI in order to improve the classification rates. Secondly, we construct a spectral-spatial graph using the obtained latent representation space. The aim is to take into account the spectral and spatial features by considering distances between neighboring pixels. Finally, a semi-supervised graph convolutional network (GCN) is trained based on the latent representation space to perform the spectral-spatial classification of HSI. The main advantage of the proposed method is to allow the automatic extraction of relevant information while preserving the spatial and spectral features of data, and improve the classification of hyperspectral images even when the number of labeled samples is low. Experiments are conducted on two real HSIs, including Indian Pines, and Pavia University datasets. Experimental results show that the proposed model DAE-GCN is competitive in classification performances compared to various state-of-the-art methods.","PeriodicalId":308802,"journal":{"name":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Congress of Advanced Technology and Engineering (ICOTEN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOTEN52080.2021.9493562","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Hyperspectral image (HSI) classification is a challenging task due to the high-dimensional spectral feature space and the low number of labeled training samples. To overcome these issues, we propose a novel methodology for HSI classification, called DAE-GCN, which is based on deep neural networks. The main goal is to preserve both spectral and spatial features in the classification task while using only a small number of labeled training samples. Firstly, we propose a deep autoencoder (DAE) model, which learns to extract relevant features from the HSI; it seeks a better representation of the HSI in order to improve classification rates. Secondly, we construct a spectral-spatial graph from the obtained latent representation space. The aim is to take both spectral and spatial features into account by considering distances between neighboring pixels. Finally, a semi-supervised graph convolutional network (GCN) is trained on the latent representation space to perform spectral-spatial classification of the HSI. The main advantage of the proposed method is that it automatically extracts relevant information while preserving the spatial and spectral features of the data, and it improves the classification of hyperspectral images even when the number of labeled samples is low. Experiments are conducted on two real HSIs, the Indian Pines and Pavia University datasets. Experimental results show that the proposed DAE-GCN model is competitive in classification performance compared to various state-of-the-art methods.
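The abstract outlines a three-stage pipeline: a deep autoencoder that compresses per-pixel spectra into a latent space, a spectral-spatial graph built over that latent space using distances between neighboring pixels, and a semi-supervised GCN trained with only a few labeled pixels. The sketch below shows one way such a pipeline could be assembled in PyTorch; the layer widths, the k-NN graph with a blended spectral/spatial distance, the `alpha` weighting, and the `train_step` loop are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a DAE-GCN-style pipeline, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph

class DeepAutoencoder(nn.Module):
    """Deep autoencoder: compresses per-pixel spectra to a latent code and reconstructs them."""
    def __init__(self, n_bands, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_bands))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def build_spectral_spatial_graph(latent, coords, k=10, alpha=0.5):
    """k-NN adjacency over a blend of latent (spectral) and pixel-coordinate (spatial)
    features, returned with symmetric normalization D^-1/2 A D^-1/2 (illustrative choice)."""
    feats = torch.cat([alpha * latent, (1 - alpha) * coords], dim=1)
    feats = feats.detach().cpu().numpy()
    adj = kneighbors_graph(feats, k, mode="connectivity", include_self=True)
    adj = torch.tensor(adj.toarray(), dtype=torch.float32)
    adj = torch.maximum(adj, adj.T)                     # symmetrize the k-NN graph
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class GCN(nn.Module):
    """Two-layer GCN producing class logits: A_hat ReLU(A_hat X W0) W1."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.w0 = nn.Linear(in_dim, hidden_dim)
        self.w1 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w0(x))
        return a_hat @ self.w1(h)

def train_step(gcn, optimizer, z, a_hat, labels, labeled_mask):
    """Semi-supervised step: the loss uses only the few labeled pixels,
    while graph propagation spreads information to all pixels."""
    optimizer.zero_grad()
    logits = gcn(z, a_hat)
    loss = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the autoencoder would first be trained on reconstruction loss alone, its latent codes `z` then fixed (or fine-tuned) as node features for the graph, and the GCN trained transductively over all pixels with supervision restricted to the labeled mask.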