{"title":"用于高光谱图像超分辨率的新型空间和光谱变换器网络","authors":"Huapeng Wu, Hui Xu, Tianming Zhan","doi":"10.1007/s00530-024-01363-3","DOIUrl":null,"url":null,"abstract":"<p>Recently, transformer networks based on hyperspectral image super-resolution have achieved significant performance in comparison with most convolution neural networks. However, this is still an open problem of how to efficiently design a lightweight transformer structure to extract long-range spatial and spectral information from hyperspectral images. This paper proposes a novel spatial and spectral transformer network (SSTN) for hyperspectral image super-resolution. Specifically, the proposed transformer framework mainly consists of multiple consecutive alternating global attention layers and regional attention layers. In the global attention layer, a spatial and spectral self-attention module with less complexity is introduced to learn spatial and spectral global interaction, which can enhance the representation ability of the network. In addition, the proposed regional attention layer can extract regional feature information by using a window self-attention based on zero-padding strategy. This alternating architecture can adaptively learn regional and global feature information of hyperspectral images. Extensive experimental results demonstrate that the proposed method can achieve superior performance in comparison with the state-of-the-art hyperspectral image super-resolution methods.</p>","PeriodicalId":51138,"journal":{"name":"Multimedia Systems","volume":"29 1","pages":""},"PeriodicalIF":3.5000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A novel spatial and spectral transformer network for hyperspectral image super-resolution\",\"authors\":\"Huapeng Wu, Hui Xu, Tianming Zhan\",\"doi\":\"10.1007/s00530-024-01363-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Recently, transformer networks based on hyperspectral image super-resolution have achieved significant performance in comparison with most convolution neural networks. However, this is still an open problem of how to efficiently design a lightweight transformer structure to extract long-range spatial and spectral information from hyperspectral images. This paper proposes a novel spatial and spectral transformer network (SSTN) for hyperspectral image super-resolution. Specifically, the proposed transformer framework mainly consists of multiple consecutive alternating global attention layers and regional attention layers. In the global attention layer, a spatial and spectral self-attention module with less complexity is introduced to learn spatial and spectral global interaction, which can enhance the representation ability of the network. In addition, the proposed regional attention layer can extract regional feature information by using a window self-attention based on zero-padding strategy. This alternating architecture can adaptively learn regional and global feature information of hyperspectral images. 
Extensive experimental results demonstrate that the proposed method can achieve superior performance in comparison with the state-of-the-art hyperspectral image super-resolution methods.</p>\",\"PeriodicalId\":51138,\"journal\":{\"name\":\"Multimedia Systems\",\"volume\":\"29 1\",\"pages\":\"\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2024-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multimedia Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00530-024-01363-3\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimedia Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00530-024-01363-3","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
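To make the described architecture concrete, the following is a minimal, illustrative PyTorch sketch of an alternating global/regional attention block: a channel-wise (spectral) self-attention layer whose attention map is only C x C, followed by a window self-attention layer that zero-pads the feature map so it divides evenly into windows. All class names, tensor shapes, the window size, and other hyperparameters are assumptions for illustration only; this is not the paper's implementation.

```python
import torch
import torch.nn as nn


class SpectralSelfAttention(nn.Module):
    """Global attention over the spectral (channel) dimension.

    Attending across channels keeps the attention map at size C x C,
    which is cheap when the band count C is much smaller than H*W.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.qkv = nn.Linear(channels, channels * 3, bias=False)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> tokens of shape (B, H*W, C)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        # Channel-wise attention map: (B, C, C)
        attn = torch.softmax(q.transpose(1, 2) @ k / (h * w) ** 0.5, dim=-1)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)
        out = self.proj(out)
        return out.transpose(1, 2).reshape(b, c, h, w) + x


class RegionalWindowAttention(nn.Module):
    """Regional (window) self-attention with zero padding at the borders."""

    def __init__(self, channels: int, window: int = 8):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        ws = self.window
        # Zero-pad so H and W are divisible by the window size.
        pad_h, pad_w = (-h) % ws, (-w) % ws
        xp = nn.functional.pad(x, (0, pad_w, 0, pad_h))
        hp, wp = h + pad_h, w + pad_w
        # Partition into non-overlapping windows -> (B * num_windows, ws*ws, C)
        xw = xp.reshape(b, c, hp // ws, ws, wp // ws, ws)
        xw = xw.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        out, _ = self.attn(xw, xw, xw)
        # Reverse the window partition and crop away the padding.
        out = out.reshape(b, hp // ws, wp // ws, ws, ws, c)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, hp, wp)
        return out[:, :, :h, :w] + x


class AlternatingSSTBlock(nn.Module):
    """One global-attention layer followed by one regional-attention layer."""

    def __init__(self, channels: int, window: int = 8):
        super().__init__()
        self.global_layer = SpectralSelfAttention(channels)
        self.regional_layer = RegionalWindowAttention(channels, window)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regional_layer(self.global_layer(x))


if __name__ == "__main__":
    # Toy hyperspectral feature map: batch 1, 31 bands, 32 x 32 pixels.
    x = torch.randn(1, 31, 32, 32)
    y = AlternatingSSTBlock(channels=31)(x)
    print(y.shape)  # torch.Size([1, 31, 32, 32])
```

Stacking several such blocks, in this reading of the abstract, lets the network interleave global spectral-spatial interaction with local window-level refinement; how the paper combines the two attention types in its spatial and spectral self-attention module may differ from this simplified sketch.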
Journal introduction:
This journal details innovative research ideas, emerging technologies, state-of-the-art methods and tools in all aspects of multimedia computing, communication, storage, and applications. It features theoretical, experimental, and survey articles.