Mingmei Zhang, Jiajie Wang, Yongan Xue, Jinling Zhao
DOI: 10.1016/j.optlastec.2025.114009
Optics and Laser Technology, Vol. 192, Article 114009, published 2025-09-28 (Journal Article; Impact Factor 5.0; JCR Q1, Optics)
An agent graph-attention–enhanced transformer for semi-supervised hyperspectral image classification
Hyperspectral image classification (HSIC) is a key topic in remote sensing research, but acquiring a sufficient quantity of high-quality labeled samples is costly and, in some cases, unattainable. We propose a semi-supervised classification network that integrates agent graph attention with a Transformer (AGAT-SS) to fully exploit both labeled and unlabeled samples and thereby improve HSIC performance. The network comprises three core components: a Feature Alignment Module (FAM), an Agent Graph Attention Network (A-GAT), and an Agent-Enhanced Feed-forward Transformer (AEF-Transformer). FAM employs channel attention and multi-scale convolutions to enhance the consistency between labeled and unlabeled data, establishing a reliable foundation for subsequent feature extraction. A-GAT introduces an agent-attention mechanism that jointly captures global and local features while markedly reducing computational complexity, yielding efficient and robust feature learning. AEF-Transformer combines agent attention with an Enhanced Feed-forward Module (AEFM), substantially strengthening feature-extraction capacity and model expressiveness. Extensive experiments on the public Indian Pines (IP), Pavia University (PU), and Houston 2013 (HU13) datasets show that AGAT-SS significantly outperforms strong baselines such as the Multiscale Dynamic Graph Convolutional Network (MDGCN), Multiscale Spectral–Spatial GAT (MSSGAT), and Dynamic Evolution GAT (DEGAT). In particular, on the IP dataset, when only 5% of the labeled samples and 20% of the unlabeled samples were used for training, AGAT-SS outperformed DEGAT by 1.16% in Overall Accuracy (OA), 1.36% in Average Accuracy (AA), and 0.91% in Kappa coefficient. These gains further confirm its superiority in semi-supervised learning.
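The abstract's claim that agent attention "markedly reduces computational complexity" follows from its two-stage structure: a small set of agent tokens first aggregates the full key/value set, and queries then attend only to those agents, cutting cost from O(n²d) to O(nad) for a ≪ n. The following is a minimal NumPy sketch of that general idea, not the authors' implementation; the shapes, the agent count `a`, and the function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def agent_attention(Q, K, V, A):
    """Generic agent attention (hypothetical sketch, not AGAT-SS itself).

    A small set of a agent tokens mediates between n queries and
    n keys/values, so cost scales as O(n*a*d) instead of O(n^2*d).
    """
    d = Q.shape[-1]
    # Stage 1: agents aggregate information from all keys/values -> (a, d)
    agent_vals = softmax(A @ K.T / np.sqrt(d)) @ V
    # Stage 2: each query attends only to the a agent tokens -> (n, d)
    return softmax(Q @ A.T / np.sqrt(d)) @ agent_vals

rng = np.random.default_rng(0)
n, a, d = 16, 4, 8                  # n tokens, a agents (a << n), dim d
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
A = rng.standard_normal((a, d))     # agent tokens, e.g. pooled from Q
out = agent_attention(Q, K, V, A)
print(out.shape)                    # (16, 8)
```

Because the agents compress the key/value set before queries attend, the two softmax stages each act over at most max(n, a) entries per row rather than n² pairs overall, which is where the complexity saving comes from.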
Journal introduction:
Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication.
The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
•developments in all types of lasers
•developments in optoelectronic devices and photonics
•developments in new photonics and optical concepts
•developments in conventional optics, optical instruments and components
•techniques of optical metrology, including interferometry and optical fibre sensors
•LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
•applications of lasers to materials processing, optical NDT display (including holography) and optical communication
•research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
•developments in optical computing and optical information processing
•developments in new optical materials
•developments in new optical characterization methods and techniques
•developments in quantum optics
•developments in light assisted micro and nanofabrication methods and techniques
•developments in nanophotonics and biophotonics
•developments in image processing and imaging systems