High-Generalized Unfolding Model With Coupled Spatial-Spectral Transformer for Hyperspectral Image Reconstruction

IF 4.2 | CAS Tier 2 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Xian-Hua Han
{"title":"High-Generalized Unfolding Model With Coupled Spatial-Spectral Transformer for Hyperspectral Image Reconstruction","authors":"Xian-Hua Han","doi":"10.1109/TCI.2025.3564776","DOIUrl":null,"url":null,"abstract":"Deep unfolding framework has witnessed remarkable progress for hyperspectral image (HSI) reconstruction benefitting from advanced consolidation of the imaging model-driven and data-driven approaches, which are generally realized with the data reconstruction error term and the prior learning network. However, current methods still encounter challenges related to insufficient generalization and representation for the high-dimensional HSI data, manifesting in two key aspects: 1) assumption of the fixed sensing mask causing low generalization for reconstruction of the compressive measurements out of distribution; 2) imperfect prior representation network for the high-dimensional data in both spatial and spectral domains. To overcome the aforementioned issues, this study presents a high-generalized deep unfolding model using coupled spatial-spectral transformer (CS2Tr) for prior learning. Specifically, to improve the generalization capability, we synthesize the training samples with diverse masks to learn the unfolding model, and propose a mask guided-data modeling module for being incorporated with both data reconstruction term and prior learning network for degradation-aware updating and representation context modeling. To achieve robust prior representation, a coupled spatial-spectral transformer aiming at modeling both non-local spatial and spectral dependencies is introduced for capturing the 3D attributes of HSI. Moreover, we conduct the feature interaction among stages to capture rich and diverse contexts, and leverage the auxiliary losses on all stages for enhancing the recovery capability of each individual step. Extensive experiments on both simulated and real scenes have demonstrated that our proposed method outperforms the state-of-the-art HSI reconstruction approaches.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"625-637"},"PeriodicalIF":4.2000,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computational Imaging","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10978052/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

The deep unfolding framework has witnessed remarkable progress in hyperspectral image (HSI) reconstruction, benefiting from the consolidation of imaging model-driven and data-driven approaches, which is generally realized with a data reconstruction error term and a prior learning network. However, current methods still face challenges related to insufficient generalization and representation for high-dimensional HSI data, manifesting in two key aspects: 1) the assumption of a fixed sensing mask, which causes poor generalization when reconstructing out-of-distribution compressive measurements; and 2) an imperfect prior representation network for the high-dimensional data in both the spatial and spectral domains. To overcome these issues, this study presents a highly generalized deep unfolding model that uses a coupled spatial-spectral transformer (CS2Tr) for prior learning. Specifically, to improve generalization capability, we synthesize training samples with diverse masks to learn the unfolding model, and propose a mask-guided data modeling module that is incorporated into both the data reconstruction term and the prior learning network for degradation-aware updating and representation context modeling. To achieve robust prior representation, a coupled spatial-spectral transformer that models both non-local spatial and spectral dependencies is introduced to capture the 3D attributes of the HSI. Moreover, we conduct feature interaction among stages to capture rich and diverse contexts, and apply auxiliary losses to all stages to enhance the recovery capability of each individual step. Extensive experiments on both simulated and real scenes demonstrate that our proposed method outperforms state-of-the-art HSI reconstruction approaches.
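The abstract describes a standard unfolding pipeline: each stage alternates a mask-aware gradient step on the data reconstruction term with a learned prior that applies attention along both the spectral and spatial dimensions. The sketch below is a minimal, hedged illustration of that structure, not the authors' implementation; the module names (UnfoldingStage, SpectralAttention, SpatialAttention), the 28-band CASSI-style setup, the simplified forward operator, and all hyperparameters are assumptions introduced for illustration.

```python
# A minimal sketch (not the authors' released code) of the unfolding structure the
# abstract describes: a mask-aware gradient step on the data term followed by a
# learned prior with spectral-wise and non-local spatial attention. All names,
# shapes (28-band CASSI-style cube), and hyperparameters are illustrative
# assumptions; the real CASSI dispersion shift is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralAttention(nn.Module):
    """Spectral-wise self-attention: each band is a token, attention map is C x C."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1))
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).flatten(2).chunk(3, dim=1)   # each (B, C, H*W)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale          # (B, C, C)
        out = (attn.softmax(dim=-1) @ v).view(b, c, h, w)
        return self.proj(out) + x                              # residual connection


class SpatialAttention(nn.Module):
    """Non-local spatial self-attention: each pixel is a token with C band features."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))       # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.transpose(1, 2).reshape(b, c, h, w) + x     # residual connection


class UnfoldingStage(nn.Module):
    """One stage: gradient descent on ||A(x) - y||^2, then the learned prior."""

    def __init__(self, channels: int = 28):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))            # learnable step size
        self.prior = nn.Sequential(SpectralAttention(channels),
                                   SpatialAttention(channels))

    def forward(self, x, y, mask):
        # x: current HSI estimate (B, C, H, W); y: measurement (B, 1, H, W);
        # mask: coded sensing mask (B, C, H, W), passed in per sample so the update
        # stays mask-aware rather than assuming one fixed mask.
        residual = (mask * x).sum(dim=1, keepdim=True) - y     # A(x) - y
        x = x - self.step * mask * residual                    # x - step * A^T(A(x) - y)
        return self.prior(x)


# Toy usage with random tensors (shapes only): 3 stages, 28 bands, 64 x 64 patch.
stages = nn.ModuleList(UnfoldingStage(28) for _ in range(3))
mask = torch.rand(1, 28, 64, 64)
gt = torch.rand(1, 28, 64, 64)
y = (mask * gt).sum(dim=1, keepdim=True)                       # simulated measurement
x = mask * y                                                   # simple mask-guided init
for stage in stages:
    x = stage(x, y, mask)
print(x.shape)                                                 # torch.Size([1, 28, 64, 64])
```

The spectral-then-spatial cascade above is only one way to "couple" the two attention branches; the paper's actual coupling mechanism, inter-stage feature interaction, and per-stage auxiliary losses are not reproduced here.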
Source journal

IEEE Transactions on Computational Imaging (Mathematics - Computational Mathematics)
CiteScore: 8.20
Self-citation rate: 7.40%
Articles per year: 59

Journal description: The IEEE Transactions on Computational Imaging publishes articles in which computation plays an integral role in the image formation process. Papers cover all areas of computational imaging, ranging from fundamental theoretical methods to the latest innovative computational imaging system designs. Topics of interest include advanced algorithms and mathematical techniques, model-based data inversion, methods for image and signal recovery from sparse and incomplete data, techniques for non-traditional sensing of image data, methods for dynamic information acquisition and extraction from imaging sensors, software and hardware for efficient computation in imaging systems, and highly novel imaging system design.