Title: Hyperspectral unmixing with spatial context and endmember ensemble learning with attention mechanism
Authors: R.M.K.L. Ratnayake, D.M.U.P. Sumanasekara, H.M.K.D. Wickramathilaka, G.M.R.I. Godaliyadda, H.M.V.R. Herath, M.P.B. Ekanayake
Journal: ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 15, Article 100086
Publication date: 2025-01-01
DOI: 10.1016/j.ophoto.2025.100086
URL: https://www.sciencedirect.com/science/article/pii/S2667393225000055
Citations: 0
Abstract
In recent years, transformer-based deep learning networks have gained popularity in hyperspectral (HS) unmixing applications due to their superior performance. Most of these networks use an Endmember Extraction Algorithm (EEA) to initialize the network. Because an EEA's performance depends on the environment, a single initialization does not ensure optimum performance. Moreover, only a few networks exploit the spatial context in HS images to solve the unmixing problem. In this paper, we propose Hyperspectral Unmixing with Spatial Context and Endmember Ensemble Learning with Attention Mechanism (SCEELA) to address these issues. The proposed method has three main components: the Signature Predictor (SP), the Pixel Contextualizer (PC), and the Abundance Predictor (AP). The SP uses an ensemble of EEAs as the initialization for each endmember, and the attention mechanism within its transformer enables ensemble learning to predict accurate endmembers. The attention mechanism in the PC enables the network to capture contextual data and provide a more refined pixel to the AP, which predicts the abundances of that pixel. SCEELA was compared with eight state-of-the-art HS unmixing algorithms on three widely used real datasets and one synthetic dataset. The results show that the proposed method achieves impressive performance compared with other state-of-the-art algorithms.
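The attention mechanisms referred to in the abstract (in the SP and PC modules) are, in transformer networks generally, instances of scaled dot-product attention. The paper's specific architecture is not detailed here, so the following is only a generic NumPy sketch of that standard operation, not the authors' implementation; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention (illustrative sketch only).

    Q: queries, shape (n_q, d); K: keys, shape (n_k, d);
    V: values, shape (n_k, d_v). Returns attended values, shape (n_q, d_v).
    """
    d = Q.shape[-1]
    # Pairwise query-key similarity, scaled to stabilize gradients.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over keys (shifted by the row max for numerical stability).
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a convex combination of the value rows.
    return weights @ V
```

In an ensemble-learning setting like the one the abstract describes, such a weighting could let the network combine several EEA-derived candidate signatures per endmember, and in a spatial-context setting it could weight neighboring pixels; both readings are inferences from the abstract, not confirmed details.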