Wuli Wang, Qi Sun, Li Zhang, Peng Ren, Jianbu Wang, Guangbo Ren, Baodi Liu
Title: A spatial–spectral fusion convolutional transformer network with contextual multi-head self-attention for hyperspectral image classification
Journal: Neural Networks, Volume 187, Article 107350 (JCR Q1, Computer Science, Artificial Intelligence; impact factor 6.0)
DOI: 10.1016/j.neunet.2025.107350
Publication date: 2025-03-14
URL: https://www.sciencedirect.com/science/article/pii/S0893608025002291
Citations: 0
Abstract
Convolutional neural networks (CNNs) can effectively extract local features, while Vision Transformers excel at capturing global features. Combining these two networks to enhance the classification performance of hyperspectral images (HSI) has garnered significant attention. However, most existing fusion methods introduce inductive biases for the Transformer by directly connecting convolutional modules and Transformer encoders for feature extraction, but rarely enhance the Transformer’s ability to extract local contextual information through convolutional embedding. In this paper, we propose a spatial–spectral fusion convolutional Transformer method (SSFCT) with contextual multi-head self-attention (CMHSA) for HSI classification. Specifically, we first design a local feature aggregation (LFA) module that uses a three-branch convolution architecture and attention layers to extract and enhance local spatial–spectral fusion features. Then, a novel CMHSA is built to extract interaction information of local contextual features by integrating static and dynamic local contextual representations from 3D convolution and attention mechanisms, and the CMHSA is integrated into the devised dual-branch spatial–spectral convolutional Transformer (DSSCT) module to simultaneously capture global–local associations in both the spatial and spectral domains. Finally, an attention feature fusion (AFF) module is proposed to fully obtain comprehensive global–local spatial–spectral features. Extensive experiments on five HSI datasets — Indian Pines, Salinas Valley, Houston2013, Botswana, and Yellow River Delta — show that SSFCT outperforms state-of-the-art methods, achieving overall accuracies of 98.03%, 99.68%, 98.65%, 97.97%, and 89.43%, respectively, demonstrating its effectiveness for HSI classification.
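The core idea of CMHSA — combining a static, convolution-derived local context with the dynamic context produced by scaled dot-product self-attention — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the static branch here is approximated by a 1-D moving average over neighbouring tokens (standing in for the 3-D convolution), all projection weights are random placeholders, and the token layout, window size, and fusion-by-summation are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_mhsa(tokens, num_heads=4, window=3, seed=0):
    """Sketch of contextual multi-head self-attention (CMHSA).

    tokens: (N, D) array of patch embeddings (flattened spatial positions).
    Dynamic context: standard multi-head scaled dot-product self-attention.
    Static context: a moving average of value vectors over a local window,
    a 1-D stand-in for the 3-D convolutional branch described in the paper.
    The two contexts are summed and projected; the fusion rule is assumed.
    """
    n, d = tokens.shape
    assert d % num_heads == 0
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    # Random placeholder projections (learned in the real model).
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))

    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv

    def split_heads(x):  # (N, D) -> (H, N, dh)
        return x.reshape(n, num_heads, dh).transpose(1, 0, 2)

    qh, kh, vh = split_heads(q), split_heads(k), split_heads(v)
    # Dynamic context: attention weights depend on the input itself.
    attn = softmax(qh @ kh.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    dynamic = (attn @ vh).transpose(1, 0, 2).reshape(n, d)

    # Static context: fixed local averaging of each token's neighbourhood.
    pad = window // 2
    padded = np.pad(v, ((pad, pad), (0, 0)), mode="edge")
    static = np.stack([padded[i:i + window].mean(axis=0) for i in range(n)])

    return (dynamic + static) @ Wo
```

Summing the two branches before the output projection lets every token carry both a data-dependent global view and a fixed local neighbourhood view; the actual module additionally operates separately over spatial and spectral branches.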
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.