A multiscale dilated attention network for hyperspectral image classification

IF 2.8 · CAS Tier 3 (Geosciences) · JCR Q2 (Astronomy & Astrophysics)
Chao Tu, Wanjun Liu, Wentao Jiang, Linlin Zhao, Tinghao Yan
{"title":"用于高光谱图像分类的多尺度扩张注意力网络","authors":"Chao Tu ,&nbsp;Wanjun Liu ,&nbsp;Wentao Jiang ,&nbsp;Linlin Zhao ,&nbsp;Tinghao Yan","doi":"10.1016/j.asr.2024.08.049","DOIUrl":null,"url":null,"abstract":"<div><div>Hyperspectral imaging is an image obtained by combining spectral detection technology and imaging technology, which can collect electromagnetic spectra in the wavelength range of visible light to near-infrared. It is an important research content in the field of ground observation in hyperspectral remote sensing. However, hyperspectral image face significant challenges in classification task due to their high spectral dimensions, lack of labeled samples, and strong correlation between bands. In order to fully extract features from both spectral and spatial dimensions and improve classification accuracy in the case of limited training samples, a multiscale dilated attention network is proposed for hyperspectral image classification. First, a three-dimensional convolutional layer is used to extract the shallow features of the image. Then, a multiscale dilated attention module is proposed by combining dilated convolution and channel attention. Using ordinary convolution and dilated convolution to form different receptive fields. Channel attention is used to remodel the obtained multiscale features, enhancing the inter-channel correlation. After that, a multiscale spatial-spectral attention module is constructed using multiple asymmetric convolutions to obtain spatial and spectral attention features at different positions, further enhancing important feature suppression over non-important features. Finally, using softmax to classify the obtained features. Using Indian Pines, Pavia University, KSC and University of Houston as experimental datasets, the overall classification accuracy of this paper’s method achieved 98.97%, 99.14%, 99.45%, and 98.56% respectively, using only 5%, 1%, 10%, and 10% of training samples per class. Compared with seven advanced classification methods, the experimental results show that the proposed method can achieve the highest classification accuracy with limited training samples.</div></div>","PeriodicalId":50850,"journal":{"name":"Advances in Space Research","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A multiscale dilated attention network for hyperspectral image classification\",\"authors\":\"Chao Tu ,&nbsp;Wanjun Liu ,&nbsp;Wentao Jiang ,&nbsp;Linlin Zhao ,&nbsp;Tinghao Yan\",\"doi\":\"10.1016/j.asr.2024.08.049\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Hyperspectral imaging is an image obtained by combining spectral detection technology and imaging technology, which can collect electromagnetic spectra in the wavelength range of visible light to near-infrared. It is an important research content in the field of ground observation in hyperspectral remote sensing. However, hyperspectral image face significant challenges in classification task due to their high spectral dimensions, lack of labeled samples, and strong correlation between bands. In order to fully extract features from both spectral and spatial dimensions and improve classification accuracy in the case of limited training samples, a multiscale dilated attention network is proposed for hyperspectral image classification. 
First, a three-dimensional convolutional layer is used to extract the shallow features of the image. Then, a multiscale dilated attention module is proposed by combining dilated convolution and channel attention. Using ordinary convolution and dilated convolution to form different receptive fields. Channel attention is used to remodel the obtained multiscale features, enhancing the inter-channel correlation. After that, a multiscale spatial-spectral attention module is constructed using multiple asymmetric convolutions to obtain spatial and spectral attention features at different positions, further enhancing important feature suppression over non-important features. Finally, using softmax to classify the obtained features. Using Indian Pines, Pavia University, KSC and University of Houston as experimental datasets, the overall classification accuracy of this paper’s method achieved 98.97%, 99.14%, 99.45%, and 98.56% respectively, using only 5%, 1%, 10%, and 10% of training samples per class. Compared with seven advanced classification methods, the experimental results show that the proposed method can achieve the highest classification accuracy with limited training samples.</div></div>\",\"PeriodicalId\":50850,\"journal\":{\"name\":\"Advances in Space Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Space Research\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0273117724008731\",\"RegionNum\":3,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ASTRONOMY & ASTROPHYSICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Space Research","FirstCategoryId":"89","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0273117724008731","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ASTRONOMY & ASTROPHYSICS","Score":null,"Total":0}
Cited by: 0

Abstract

Hyperspectral imaging combines spectral detection and imaging technology to produce images that capture electromagnetic spectra from the visible to the near-infrared range, and it is an important research topic in ground observation with hyperspectral remote sensing. However, hyperspectral images pose significant challenges for classification due to their high spectral dimensionality, scarcity of labeled samples, and strong correlation between bands. To fully extract features along both the spectral and spatial dimensions and to improve classification accuracy with limited training samples, a multiscale dilated attention network is proposed for hyperspectral image classification. First, a three-dimensional convolutional layer extracts shallow features from the image. Then, a multiscale dilated attention module is built by combining dilated convolution with channel attention: ordinary and dilated convolutions form receptive fields of different sizes, and channel attention remodels the resulting multiscale features, strengthening inter-channel correlation. Next, a multiscale spatial-spectral attention module is constructed from multiple asymmetric convolutions to obtain spatial and spectral attention features at different positions, further emphasizing important features over unimportant ones. Finally, a softmax layer classifies the resulting features. On the Indian Pines, Pavia University, KSC, and University of Houston datasets, the method achieves overall classification accuracies of 98.97%, 99.14%, 99.45%, and 98.56%, respectively, using only 5%, 1%, 10%, and 10% of the samples per class for training. Comparisons with seven state-of-the-art classification methods show that the proposed method achieves the highest classification accuracy under limited training samples.
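To make the two attention modules from the abstract concrete, below is a minimal PyTorch sketch. Everything beyond what the abstract states is an assumption for illustration: the dilation rates (1, 2, 3), the squeeze-and-excitation form of the channel attention, the residual connection, the asymmetric kernel size k=7, and the use of 2-D feature maps (the paper's initial 3-D convolution over the spectral cube is omitted). The paper's exact design may differ.

```python
# Illustrative sketch only: layer sizes, dilation rates, and the SE-style
# channel attention are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)  # reweight channels by learned importance


class MultiscaleDilatedAttention(nn.Module):
    """Parallel ordinary and dilated 3x3 convolutions give receptive fields
    of different sizes; channel attention remodels the fused features."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)  # d=1 is an ordinary convolution
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.attn = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.attn(self.fuse(multiscale)) + x  # residual link (assumed)


class SpatialSpectralAttention(nn.Module):
    """Asymmetric (1xk and kx1) convolutions capture attention at different
    spatial positions; a 1x1 convolution attends along the spectral axis."""

    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2)),
            nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0)),
            nn.Sigmoid(),
        )
        self.spectral = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.spatial(x) * self.spectral(x)


if __name__ == "__main__":
    # Smoke test on fake 64-channel feature maps from 11x11 pixel patches.
    feats = torch.randn(2, 64, 11, 11)
    feats = MultiscaleDilatedAttention(64)(feats)
    feats = SpatialSpectralAttention(64)(feats)
    print(feats.shape)  # torch.Size([2, 64, 11, 11])
```

In a full pipeline these blocks would sit between the shallow 3-D convolutional feature extractor and the final softmax classifier; the padding in each branch keeps spatial dimensions fixed so the multiscale outputs can be concatenated directly.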
Source journal
Advances in Space Research
CAS category: Geosciences & Astronomy (Comprehensive Earth Science)
CiteScore: 5.20
Self-citation rate: 11.50%
Articles per year: 800
Review time: 5.8 months
Journal description: The COSPAR publication Advances in Space Research (ASR) is an open journal covering all areas of space research including: space studies of the Earth's surface, meteorology, climate, the Earth-Moon system, planets and small bodies of the solar system, upper atmospheres, ionospheres and magnetospheres of the Earth and planets including reference atmospheres, space plasmas in the solar system, astrophysics from space, materials sciences in space, fundamental physics in space, space debris, space weather, Earth observations of space phenomena, etc. NB: Please note that manuscripts related to life sciences as related to space are no longer accepted for submission to Advances in Space Research; such manuscripts should be submitted to the COSPAR journal Life Sciences in Space Research (LSSR). All submissions are reviewed by two scientists in the field. COSPAR is an interdisciplinary scientific organization concerned with the progress of space research on an international scale. Operating under the rules of ICSU, COSPAR ignores political considerations and considers all questions solely from the scientific viewpoint.