PCET: Patch Confidence-Enhanced Transformer with efficient spectral–spatial features for hyperspectral image classification

Impact Factor 7.5 · Q1, Earth and Planetary Sciences · CAS Tier 1, Earth Sciences
Li Fang, Xuanli Lan, Tianyu Li, Huifang Shen
{"title":"PCET: Patch Confidence-Enhanced Transformer with efficient spectral–spatial features for hyperspectral image classification","authors":"Li Fang, Xuanli Lan, Tianyu Li, Huifang Shen","doi":"10.1016/j.jag.2024.104308","DOIUrl":null,"url":null,"abstract":"Hyperspectral image (HSI) classification based on deep learning has demonstrated promising performance. In general, using patch-wise samples helps to extract the spatial relationship between pixels and local contextual information. However, the presence of background or other category information in an image patch that is inconsistent with the central target category has a negative effect on classification. To solve this issue, a patch confidence-enhanced transformer (PCET) approach for HSI classification is proposed. To be specific, we design a patch quality assessment (PQA) branch model to evaluate the input patches during training process, which effectively filters out the intrusive non-central pixels. The output confidence of the branch model serves as a quantitative indicator of the contribution degree of the input patch to the overall training efficacy, which is subsequently weighted in the loss function, thereby endowing the model with the capability to dynamically adjust its learning focus based on the qualitative of the inputs. Second, a spectral–spatial multi-feature fusion (SSMF) module is devised to procure scores of representative information simultaneously and fully exploit the potential of multi-scale feature HSI data. Finally, to enhance feature discrimination, global context is efficiently modeled using the efficient additive attention transformer (<mml:math altimg=\"si4.svg\" display=\"inline\"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant=\"normal\">EA</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant=\"normal\">T</mml:mi></mml:mrow></mml:math>) module, which streamlines the attention process and allows the model to learn efficient and robust global representations for accurate classification of the central pixel. A series of experimental results executed on real HSI datasets have substantiated that the proposed PCET can achieve outstanding performance, even when only 10 samples per category are used for training.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"32 1","pages":""},"PeriodicalIF":7.5000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Applied Earth Observation and Geoinformation","FirstCategoryId":"89","ListUrlMain":"https://doi.org/10.1016/j.jag.2024.104308","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Earth and Planetary Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Hyperspectral image (HSI) classification based on deep learning has demonstrated promising performance. In general, patch-wise samples help to capture the spatial relationships between pixels and local contextual information. However, background pixels or pixels from other categories within an image patch that are inconsistent with the central target category have a negative effect on classification. To address this issue, a patch confidence-enhanced transformer (PCET) approach for HSI classification is proposed. Specifically, a patch quality assessment (PQA) branch is first designed to evaluate the input patches during training and effectively suppress the influence of intrusive non-central pixels. The confidence output by the branch serves as a quantitative indicator of how much an input patch contributes to training and is used to weight the loss function, enabling the model to dynamically adjust its learning focus according to the quality of the inputs. Second, a spectral–spatial multi-feature fusion (SSMF) module is devised to extract multiple types of representative information simultaneously and fully exploit the potential of multi-scale HSI features. Finally, to enhance feature discrimination, global context is efficiently modeled with the efficient additive attention transformer (EA²T) module, which streamlines the attention computation and allows the model to learn efficient and robust global representations for accurate classification of the central pixel. Experiments on real HSI datasets substantiate that the proposed PCET achieves outstanding performance, even when only 10 samples per category are used for training.
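The abstract states that the PQA branch outputs a per-patch confidence that is "subsequently weighted in the loss function", but the exact formulation is not given here. The following is a minimal sketch, assuming a sigmoid-activated scalar confidence per patch re-weighting a standard cross-entropy term; the names `ConfidenceWeightedLoss`, `classifier`, and `pqa_branch` are illustrative placeholders, not identifiers from the paper.

```python
# Sketch only: one plausible way a PQA-style confidence could weight the
# per-sample classification loss. Not the authors' implementation.
import torch
import torch.nn as nn

class ConfidenceWeightedLoss(nn.Module):
    """Cross-entropy re-weighted by a per-patch confidence in [0, 1]."""

    def __init__(self):
        super().__init__()
        # reduction="none" keeps one loss value per sample so it can be weighted
        self.ce = nn.CrossEntropyLoss(reduction="none")

    def forward(self, logits, labels, confidence):
        # logits:     (B, num_classes) class scores for the central pixel
        # labels:     (B,)             ground-truth class of the central pixel
        # confidence: (B,)             PQA output; higher = cleaner patch
        per_sample = self.ce(logits, labels)            # (B,)
        # Detached here so the weighting alone cannot collapse the confidence
        # to zero; in practice the PQA branch would need its own supervision
        # or a regularizer, which this sketch omits.
        weighted = confidence.detach() * per_sample
        return weighted.mean()

# Hypothetical usage, with `classifier` and `pqa_branch` standing in for the
# PCET backbone and the patch quality assessment branch:
#   logits = classifier(patches)
#   confidence = torch.sigmoid(pqa_branch(patches)).squeeze(-1)
#   loss = ConfidenceWeightedLoss()(logits, labels, confidence)
```

The design intent, as described in the abstract, is that patches contaminated by non-central categories receive a low confidence and therefore contribute less to the gradient, shifting the model's learning focus toward high-quality patches.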
Journal metrics: CiteScore 10.20 · Self-citation rate 8.00% · Articles per year 49 · Average review time 7.2 months
About the journal: The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes such as data capture, databasing, visualization, interpretation, data quality, and spatial uncertainty.