Cross-Scene Relationship Mining with Learning Graph Net for Hyperspectral Image Classification

Junbin Chen, Minchao Ye, Huijuan Lu, Ling Lei
{"title":"基于学习图网的高光谱图像分类跨场景关系挖掘","authors":"Junbin Chen, Minchao Ye, Huijuan Lu, Ling Lei","doi":"10.1109/ICAICE54393.2021.00106","DOIUrl":null,"url":null,"abstract":"The problem of hyperspectral image (HSI) classification is usually accompanied by the problem of high dimension and few samples, that is, the high-dimensional-small-sample-size problem. In recent years, transfer learning has been widely used to solve this problem. In the cross-scene HSI classification, we consider a scene with a rich number of samples (called source scene) and a scene with a small number of samples (called target scene). The idea of transfer learning is to transfer the knowledge contained in the rich samples of source scene to target scene. Many HSI classification methods assume that two scenes come from the same feature space. However, the facts are often unsatisfactory, and the two scenes are likely to come from different feature spaces. In this case, we proposed a heterogeneous transfer learning method named cross-domain variational autoencoder (CDVAE), which achieved good results. But the imperfection is that CDVAE cannot use unlabeled samples on target scene to help classification. Therefore, on this basis, we have proposed a learning graph net (LGnet) of using convolutional neural networks (CNN) and graph to learn the relationship between cross-scene samples, so as to use the potential information of unlabeled samples. Then, a new method cross-domain variational autoencoder with learned graph (CDVAE-LG) was proposed by combining LGnet with CDVAE. The experimental results show that CDVAE-LG can effectively learn the information between cross-scene samples and help classification.","PeriodicalId":388444,"journal":{"name":"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Cross-Scene Relationship Mining with Learning Graph Net for Hyperspectral Image Classification\",\"authors\":\"Junbin Chen, Minchao Ye, Huijuan Lu, Ling Lei\",\"doi\":\"10.1109/ICAICE54393.2021.00106\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The problem of hyperspectral image (HSI) classification is usually accompanied by the problem of high dimension and few samples, that is, the high-dimensional-small-sample-size problem. In recent years, transfer learning has been widely used to solve this problem. In the cross-scene HSI classification, we consider a scene with a rich number of samples (called source scene) and a scene with a small number of samples (called target scene). The idea of transfer learning is to transfer the knowledge contained in the rich samples of source scene to target scene. Many HSI classification methods assume that two scenes come from the same feature space. However, the facts are often unsatisfactory, and the two scenes are likely to come from different feature spaces. In this case, we proposed a heterogeneous transfer learning method named cross-domain variational autoencoder (CDVAE), which achieved good results. But the imperfection is that CDVAE cannot use unlabeled samples on target scene to help classification. 
Therefore, on this basis, we have proposed a learning graph net (LGnet) of using convolutional neural networks (CNN) and graph to learn the relationship between cross-scene samples, so as to use the potential information of unlabeled samples. Then, a new method cross-domain variational autoencoder with learned graph (CDVAE-LG) was proposed by combining LGnet with CDVAE. The experimental results show that CDVAE-LG can effectively learn the information between cross-scene samples and help classification.\",\"PeriodicalId\":388444,\"journal\":{\"name\":\"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAICE54393.2021.00106\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAICE54393.2021.00106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 2

Abstract

The problem of hyperspectral image (HSI) classification is usually accompanied by high dimensionality and few samples, i.e., the high-dimensional small-sample-size problem. In recent years, transfer learning has been widely used to address this problem. In cross-scene HSI classification, we consider a scene with abundant samples (called the source scene) and a scene with few samples (called the target scene). The idea of transfer learning is to transfer the knowledge contained in the abundant samples of the source scene to the target scene. Many HSI classification methods assume that the two scenes come from the same feature space. In practice, however, the two scenes are likely to come from different feature spaces. For this case, we previously proposed a heterogeneous transfer learning method named cross-domain variational autoencoder (CDVAE), which achieved good results. Its shortcoming is that CDVAE cannot use unlabeled samples in the target scene to help classification. Therefore, on this basis, we propose a learning graph net (LGnet) that uses convolutional neural networks (CNNs) and a graph to learn the relationships between cross-scene samples, so as to exploit the latent information of unlabeled samples. We then propose a new method, cross-domain variational autoencoder with learned graph (CDVAE-LG), by combining LGnet with CDVAE. Experimental results show that CDVAE-LG can effectively learn the relationships between cross-scene samples and improve classification.
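
The paper provides no implementation details beyond the abstract, so the following is only a minimal sketch of the two ingredients the abstract names: per-scene variational autoencoder branches that map different feature spaces (e.g., different numbers of spectral bands) into a shared latent space, and a graph over latent features through which unlabeled target-scene samples contribute a smoothness term. All module names, dimensions, and the k-NN affinity used here in place of the actual LGnet are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneVAE(nn.Module):
    """One VAE branch per scene; the two branches share the latent dimension."""
    def __init__(self, in_dim, latent_dim=32, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar, z

def vae_loss(x, recon, mu, logvar):
    # standard reconstruction + KL terms of a VAE
    recon_term = F.mse_loss(recon, x)
    kl_term = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

def knn_graph_affinity(z, k=5):
    """Hypothetical stand-in for the learned graph: a row-normalized k-NN
    affinity matrix over latent features of (possibly unlabeled) samples."""
    dist = torch.cdist(z, z)                                   # pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]       # drop self-edges
    A = torch.zeros_like(dist)
    A.scatter_(1, knn, 1.0)
    A = (A + A.t()).clamp(max=1.0)                             # symmetrize
    return A / A.sum(dim=1, keepdim=True).clamp(min=1e-8)

def graph_smoothness(z, A):
    """Encourage samples connected in the graph to stay close in latent space."""
    neigh = A @ z
    return F.mse_loss(z, neigh)

if __name__ == "__main__":
    # toy dimensions: source and target scenes with different numbers of bands
    src_vae, tgt_vae = SceneVAE(in_dim=102), SceneVAE(in_dim=144)
    x_src, x_tgt = torch.randn(16, 102), torch.randn(16, 144)
    rec_s, mu_s, lv_s, z_s = src_vae(x_src)
    rec_t, mu_t, lv_t, z_t = tgt_vae(x_tgt)
    z_all = torch.cat([z_s, z_t], dim=0)        # cross-scene samples in one latent space
    A = knn_graph_affinity(z_all.detach(), k=4)
    loss = (vae_loss(x_src, rec_s, mu_s, lv_s)
            + vae_loss(x_tgt, rec_t, mu_t, lv_t)
            + 0.1 * graph_smoothness(z_all, A))
    print(float(loss))
```

In the authors' method the cross-scene graph is learned by a CNN (LGnet) rather than fixed by k-NN distances; the sketch only indicates where such a graph-based term would plug into a CDVAE-style training loss.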