Hyperspectral image classification based on dual-branch attention network with 3-D octave convolution

Ling Xu, Guo Cao, Lin Deng, Lanwei Ding, Hao Xu, Qikun Pan
{"title":"基于双分支注意网络的三维八度卷积高光谱图像分类","authors":"Ling Xu, Guo Cao, Lin Deng, Lanwei Ding, Hao Xu, Qikun Pan","doi":"10.1117/12.2644256","DOIUrl":null,"url":null,"abstract":"Hyperspectral Image (HSI) classification aims to assign each hyperspectral pixel with an appropriate land-cover category. In recent years, deep learning (DL) has received attention from a growing number of researchers. Hyperspectral image classification methods based on DL have shown admirable performance, but there is still room for improvement in terms of exploratory capabilities in spatial and spectral dimensions. To improve classification accuracy and reduce training samples, we propose a double branch attention network (OCDAN) based on 3-D octave convolution and dense block. Especially, we first use a 3-D octave convolution model and dense block to extract spatial features and spectral features respectively. Furthermore, a spatial attention module and a spectral attention module are implemented to highlight more discriminative information. Then the extracted features are fused for classification. Compared with the state-of-the-art methods, the proposed framework can achieve superior performance on two hyperspectral datasets, especially when the training samples are signally lacking. In addition, ablation experiments are utilized to validate the role of each part of the network.","PeriodicalId":314555,"journal":{"name":"International Conference on Digital Image Processing","volume":"166 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hyperspectral image classification based on dual-branch attention network with 3-D octave convolution\",\"authors\":\"Ling Xu, Guo Cao, Lin Deng, Lanwei Ding, Hao Xu, Qikun Pan\",\"doi\":\"10.1117/12.2644256\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hyperspectral Image (HSI) classification aims to assign each hyperspectral pixel with an appropriate land-cover category. In recent years, deep learning (DL) has received attention from a growing number of researchers. Hyperspectral image classification methods based on DL have shown admirable performance, but there is still room for improvement in terms of exploratory capabilities in spatial and spectral dimensions. To improve classification accuracy and reduce training samples, we propose a double branch attention network (OCDAN) based on 3-D octave convolution and dense block. Especially, we first use a 3-D octave convolution model and dense block to extract spatial features and spectral features respectively. Furthermore, a spatial attention module and a spectral attention module are implemented to highlight more discriminative information. Then the extracted features are fused for classification. Compared with the state-of-the-art methods, the proposed framework can achieve superior performance on two hyperspectral datasets, especially when the training samples are signally lacking. 
In addition, ablation experiments are utilized to validate the role of each part of the network.\",\"PeriodicalId\":314555,\"journal\":{\"name\":\"International Conference on Digital Image Processing\",\"volume\":\"166 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Digital Image Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2644256\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Digital Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2644256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Hyperspectral image (HSI) classification aims to assign each hyperspectral pixel an appropriate land-cover category. In recent years, deep learning (DL) has attracted a growing number of researchers, and DL-based HSI classification methods have shown admirable performance, but there is still room for improvement in how they exploit the spatial and spectral dimensions. To improve classification accuracy and reduce the number of training samples required, we propose a dual-branch attention network (OCDAN) based on 3-D octave convolution and dense blocks. Specifically, we first use a 3-D octave convolution model and a dense block to extract spatial features and spectral features, respectively. Furthermore, a spatial attention module and a spectral attention module are applied to highlight the more discriminative information. The extracted features are then fused for classification. Compared with state-of-the-art methods, the proposed framework achieves superior performance on two hyperspectral datasets, especially when training samples are severely limited. In addition, ablation experiments are used to validate the role of each part of the network.
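The abstract only outlines the architecture at a high level. As a rough illustration of its two key ingredients, the sketch below shows a minimal 3-D octave convolution layer together with simple spectral and spatial attention modules, assuming a PyTorch implementation. The class names, the alpha split ratio, the attention designs, and the toy input sizes are assumptions made for illustration only; this is not the authors' OCDAN code.

# Minimal sketch of the two building blocks named in the abstract, assuming
# PyTorch. Layer sizes, the alpha=0.5 split, and the attention designs are
# hypothetical illustrations, not the paper's actual OCDAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OctaveConv3d(nn.Module):
    """3-D octave convolution: features are split into a high-frequency part
    at full spatial resolution and a low-frequency part at half spatial
    resolution (the spectral depth is kept), reducing spatial redundancy."""

    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        self.in_lo, self.out_lo = int(alpha * in_ch), int(alpha * out_ch)
        self.in_hi, self.out_hi = in_ch - self.in_lo, out_ch - self.out_lo
        self.h2h = nn.Conv3d(self.in_hi, self.out_hi, kernel_size, padding=padding)
        self.h2l = nn.Conv3d(self.in_hi, self.out_lo, kernel_size, padding=padding)
        self.l2l = nn.Conv3d(self.in_lo, self.out_lo, kernel_size, padding=padding)
        self.l2h = nn.Conv3d(self.in_lo, self.out_hi, kernel_size, padding=padding)

    def forward(self, x_h, x_l):
        # High-frequency output: full-resolution conv plus upsampled low branch.
        y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), size=x_h.shape[2:])
        # Low-frequency output: half-resolution conv plus pooled high branch.
        y_l = self.l2l(x_l) + self.h2l(F.avg_pool3d(x_h, (1, 2, 2)))
        return y_h, y_l


class SpectralAttention(nn.Module):
    """Squeeze-and-excitation style reweighting of feature channels."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (N, C, bands, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))         # global average pool -> (N, C)
        return x * w.view(x.size(0), -1, 1, 1, 1)


class SpatialAttention(nn.Module):
    """Single-channel spatial mask built from mean- and max-pooled maps."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))


if __name__ == "__main__":
    # Toy HSI patch: batch 2, 16 feature maps, 20 spectral bands, 9x9 window.
    x = torch.randn(2, 16, 20, 9, 9)
    lo = 16 // 2                                   # alpha = 0.5 channel split
    x_h, x_l = x[:, lo:], F.avg_pool3d(x[:, :lo], (1, 2, 2))
    y_h, y_l = OctaveConv3d(16, 32)(x_h, x_l)
    y = SpatialAttention()(SpectralAttention(y_h.size(1))(y_h))
    print(y_h.shape, y_l.shape, y.shape)

In the paper's pipeline, the dense-block spectral branch and the octave-convolution spatial branch would each be weighted by the corresponding attention module and their outputs fused before the classifier; the abstract does not specify the fusion operator (concatenation is a common choice), so that detail is omitted from the sketch above.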