IEEE Transactions on Computational Imaging: Latest Articles

Laser Protection via Jointly Learned Defocus and Image Reconstruction
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-07-10 DOI: 10.1109/TCI.2025.3587469
Johannes Meyer;Michael Henrichsen;Christian Eisele;Bastian Schwarz;Jürgen Limbach;Gunnar Ritt;Stefanie Dengler;Lukas Dippon;Christian Kludt
Abstract: We propose a method to harden sensors against laser radiation by deliberately defocusing the optics, and to recover the sought focused images of the scene via image reconstruction. The introduced defocus widens the laser spot incident on the sensor and greatly reduces its damage potential. We employ a coded aperture and optimize its pattern jointly with the free parameters of the image reconstruction pipeline. For the image reconstruction, we combine a state-of-the-art alternating direction method of multipliers (ADMM)-based, physically informed deconvolution stage with a U-Net-like neural network that removes remaining reconstruction artifacts. To evaluate the proposed approach, we conducted reconstruction experiments on simulated data (including ablation studies) and on real data, and performed sensor destruction tests with and without sensor protection. Destructive experiments with increasing laser power suggest that our approach has the potential to raise the tolerable radiation threshold by about three orders of magnitude.
Vol. 11, pp. 967-979. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11074739
Citations: 0
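The protection scheme hinges on undoing a known, deliberate defocus. The paper uses an ADMM-based deconvolution stage followed by a U-Net; as a much smaller stand-in, a Wiener filter illustrates how a focused image can be recovered when the defocus PSF is known (function names and parameters below are illustrative, not from the paper):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-6):
    """Frequency-domain Wiener deconvolution: a minimal stand-in for the
    paper's ADMM-based physically informed deconvolution stage.
    `psf` is a centered kernel of the same shape as `blurred`;
    `nsr` is the assumed noise-to-signal ratio (regularization)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + NSR) applied to the blurred spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

On noiseless synthetic data with a known Gaussian defocus PSF, this already reduces the reconstruction error well below that of the blurred input; the paper's learned pipeline additionally handles noise and coded-aperture structure.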
Robust Preprocessing of Impulsive Motion Artifacts Using Low-Rank Matrix Recovery for Electrical Impedance Tomography
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-07-10 DOI: 10.1109/TCI.2025.3587458
Xiao-Peng Li;Zhang-Lei Shi;Meng Dai;Hing Cheung So;Inéz Frerichs;Zhanqi Zhao;Lin Yang
Abstract: Electrical impedance tomography (EIT) is a valuable bedside tool in critical care medicine and pneumology. However, artifacts associated with body and electrode movements, especially impulsive motion artifacts, hinder its routine use in clinical scenarios. Most existing algorithms for EIT data preprocessing or imaging cannot effectively address this issue. In this paper, we propose a novel method, robust preprocessing for EIT (RP4EIT), that preprocesses EIT boundary voltages using the concept of low-rank matrix recovery. It aims to resist impulsive motion artifacts and thereby enhance imaging quality. To attain good performance on both normal measurements and contaminated data, we design a two-stage denoising algorithm using robust statistical analysis and low-rank recovery. Specifically, the EIT boundary voltages are first arranged as a matrix whose rows and columns correspond to channels and frames, respectively. The matrix entries corrupted by impulsive noise are then identified and treated as missing elements, and RP4EIT exploits the low-rank property to restore the missing components, eliminating the impulsive motion artifacts from the EIT measurements. Furthermore, a convergence guarantee for RP4EIT is established. Experimental results on phantom and patient data demonstrate that RP4EIT removes impulsive motion artifacts from boundary voltages and that the recovered data yield high-quality EIT images.
Vol. 11, pp. 942-954.
Citations: 0
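The two-stage idea (flag impulsive entries with robust statistics, then fill them in by low-rank completion) can be sketched in toy form; RP4EIT's actual algorithm and its convergence guarantee are more involved, and everything below is illustrative:

```python
import numpy as np

def impulse_mask(V, thresh=5.0):
    """Flag entries far from each channel's median via a robust
    (MAD-based) z-score. Returns True for trusted entries."""
    med = np.median(V, axis=1, keepdims=True)
    mad = np.median(np.abs(V - med), axis=1, keepdims=True) + 1e-12
    return np.abs(V - med) / (1.4826 * mad) < thresh

def lowrank_complete(V, mask, rank=2, iters=300):
    """Fill untrusted entries with a rank-`rank` approximation
    (hard-impute style iteration): truncated SVD, then re-impose
    the trusted measurements, repeated until stable."""
    X = V.copy()
    X[~mask] = V[mask].mean()                      # initialize missing entries
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, V, L)                   # keep trusted, fill the rest
    return X
```

Here rows play the role of channels and columns the role of frames, matching the matrix arrangement described in the abstract.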
Test-Time Adaptation Improves Inverse Problem Solving With Patch-Based Diffusion Models
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-07-10 DOI: 10.1109/TCI.2025.3587407
Jason Hu;Bowen Song;Jeffrey A. Fessler;Liyue Shen
Abstract: Diffusion models have achieved excellent results in solving inverse problems thanks to their ability to learn strong image priors, but existing approaches require a large training dataset drawn from the same distribution as the test data. In practice, the available training data can range from nonexistent to very large, and conventional diffusion model training from limited data can yield poor reconstructions due to poorly learned priors. One potential improvement is to start from a diffusion model trained on available data with a possibly mismatched distribution and then refine the network at reconstruction time to account for the mismatch. In this work, we investigate the effect of this refinement process on diffusion models trained on varying degrees of out-of-distribution data. Specifically, we use a self-supervised loss to adapt the learned diffusion network to the test data while keeping the network output consistent with the measurements. We show, both theoretically and experimentally, that test-time adaptation of a patch-based diffusion prior yields higher-quality reconstructions than test-time refinement of traditional whole-image diffusion models. Extensive experiments across a wide range of inverse problems show that test-time adaptation significantly improves reconstruction quality when there are significant domain shifts between the training and testing distributions. Interestingly, even in the in-distribution case, test-time adaptation significantly improves reconstruction quality.
Vol. 11, pp. 980-991.
Citations: 0
DAMMER: Direct Adaptive Multi-Resolution MEsh Reconstruction From X-Ray Measurements
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-07-10 DOI: 10.1109/TCI.2025.3587408
Jannes Merckx;Arnold J. den Dekker;Jan Sijbers;Jan De Beenhouwer
Abstract: X-ray computed tomography (XCT) reconstructs a scanned object from measured projection data, with the object typically represented on a voxel grid during reconstruction. However, since material interfaces rarely align with voxel boundaries, a voxel representation inherently suffers from partial volume effects. This paper presents DAMMER, a method that reconstructs a multi-resolution triangle mesh representing the attenuation values of a piecewise homogeneous object (as often encountered in industrial CT) directly from the object's X-ray projection data. DAMMER progressively refines the mesh to match the object: homogeneous segments are created via an agglomerative hierarchical clustering procedure that targets a compact object description by optimizing a weighted sum of the projection difference and the number of edges between segments, and these segments are progressively optimized to match the object's homogeneous materials. Simulation and real-data experiments show that DAMMER produces significantly more accurate reconstructions than pixel-grid methods, outperforming conventional voxel-based methods in capturing the true geometry of complex material boundaries.
Vol. 11, pp. 926-941.
Citations: 0
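DAMMER's clustering objective trades a data-fit (projection-difference) term against the number of boundaries between segments. A 1-D greedy analogue of that agglomerative merging conveys the idea (the paper operates on triangle meshes and X-ray projections; this sketch and its cost terms are purely illustrative):

```python
import numpy as np

def agglomerate(values, beta=0.5):
    """Greedily merge adjacent 1-D segments while the total cost drops.

    Cost = sum of within-segment squared deviations (data fit)
         + beta * number of segment boundaries (compactness),
    a toy analogue of DAMMER's weighted clustering objective."""
    segs = [[v] for v in values]          # start with one segment per sample

    def cost(s):
        fit = sum(((np.array(g) - np.mean(g)) ** 2).sum() for g in s)
        return fit + beta * (len(s) - 1)

    improved = True
    while improved and len(segs) > 1:
        improved = False
        best_i, best_c = None, cost(segs)
        for i in range(len(segs) - 1):    # try every adjacent merge
            trial = segs[:i] + [segs[i] + segs[i + 1]] + segs[i + 2:]
            c = cost(trial)
            if c < best_c:
                best_i, best_c = i, c
        if best_i is not None:            # commit the best merge found
            segs = segs[:best_i] + [segs[best_i] + segs[best_i + 1]] + segs[best_i + 2:]
            improved = True
    return segs
```

On a signal with two nearly homogeneous plateaus, the merging stops at two segments: merging within a plateau costs almost nothing in fit and saves a boundary penalty, while merging across the step would raise the fit term more than the penalty it saves.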
Pattern Research on Sparse Camera Array for Super-Resolution Imaging
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-07-10 DOI: 10.1109/TCI.2025.3588025
Tianren Li;Yu Zhang;Tengda Huang;Naishu Jia;Fulin Liu;Yufu Qu;Zhenzhong Wei
Abstract: This study addresses the quantitative pattern design of sparse camera arrays for multi-frame super-resolution (SR) imaging systems. We propose the Weighted Ratio of Repeated Spectra (WRRS) metric to predict the super-resolution performance of a sparse camera array. The WRRS combines a weight determined by the energy distribution of the imaging scene with a ratio of repeated-spectra intensity determined by the array pattern, allowing the array pattern to be optimized for a given scene without building an actual imaging system. Real-image experiments show that the WRRS correlates with the median of the SR image-quality distribution. A six-camera sparse array is then designed for aircraft scenes with varying object distances. Compared with existing qualitatively and quantitatively designed patterns, the proposed pattern, designed specifically for aircraft, achieves better average PSNR in the aircraft scene.
Vol. 11, pp. 992-1004.
Citations: 0
Vignetting Correction Through Color-Intensity Map Entropy Optimization
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-06-26 DOI: 10.1109/TCI.2025.3583465
Zhuang He;Hai-Miao Hu;Likun Gao;Haoxin Hu;Xinhui Xue;Zhenglin Tang;Difeng Zhu;Haowen Zheng;Chongze Wang
Abstract: Vignetting correction is an essential step in image signal processing and an important part of obtaining high-quality images, yet research in this area has received limited attention. Mainstream methods are calibration-based and involve complex procedures, and many achieve low accuracy and poor robustness in practice. In this paper, we analyze the optical principles of vignetting and its influence on the image, and propose an algorithm based on color-intensity map entropy optimization to correct image vignetting. Moreover, given the lack of vignetting datasets, we propose a method for constructing a vignetting image dataset by capturing real scenes; compared with datasets generated through simulation, ours is more authentic and reliable. Extensive experiments on this dataset show that the proposed algorithm achieves the best performance.
Vol. 11, pp. 911-925.
Citations: 0
G2L-Stereo: Global to Local Two-Stage Real-Time Stereo Matching Network
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-06-19 DOI: 10.1109/TCI.2025.3581105
Jie Tang;Gaofeng Peng;Jialu Liu;Bo Yu
Abstract: Developing fast and accurate stereo matching algorithms is crucial for real-world embedded vision applications. Depth information plays a significant role in scene understanding, and depth computed via stereo matching is generally more precise and reliable than monocular depth estimation. However, speed-oriented stereo matching methods often suffer from poor feature representation due to sparse sampling, and from detail loss caused by unreasonable disparity allocation during upsampling. To address these issues, we propose G2L-Stereo, a two-stage real-time stereo matching network that combines global and local disparity range prediction. In the global stage, we introduce feature-guided connections for cost aggregation, enhancing the expressive power of sparse features by aligning the feature space across cost volumes of different scales, and we incorporate confidence estimation into the upsampling algorithm to reduce the propagation of inaccurate disparities, yielding more precise disparity maps. In the local stage, we develop a disparity refinement module guided by neighborhood similarity that aggregates similar neighboring costs to estimate disparity residuals, restoring details lost in the low-resolution disparity map and further improving accuracy. Extensive experiments on the SceneFlow and KITTI datasets validate the effectiveness of our model, showing that G2L-Stereo achieves fast inference while maintaining accuracy comparable to state-of-the-art methods.
Vol. 11, pp. 852-863.
Citations: 0
Energy-Coded Spectral CT Imaging Method Based on Projection Mix Separation
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-06-11 DOI: 10.1109/TCI.2025.3578762
Xiaojie Zhao;Yihong Li;Yan Han;Ping Chen;Jiaotong Wei
Abstract: Spectral CT can perform material decomposition from polychromatic attenuation data, generate virtual monochromatic or virtual narrow-energy-width images in which beam-hardening artifacts are suppressed, and provide detailed energy attenuation coefficients for material characterization. We propose an energy-coded spectral CT imaging method based on projection mix separation that enables simultaneous energy decoding and image reconstruction. An X-ray energy-coded forward model is constructed and, leveraging the Poisson statistics of the measurement data, a constrained optimization problem is formulated over both the energy-coded coefficient matrix and the material decomposition coefficient matrix, solved with a block coordinate descent algorithm. Simulations and experimental results demonstrate that the decoded energy spectrum distribution and the virtual narrow-energy-width CT images are accurate and effective. The proposed method suppresses beam-hardening artifacts and enhances the material identification capability of traditional CT.
Vol. 11, pp. 839-851.
Citations: 0
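Block coordinate descent optimizes one coefficient matrix while holding the other fixed, then swaps. A minimal unconstrained least-squares analogue shows the alternating structure (the paper's formulation is a constrained Poisson-likelihood problem; the factorization and names below are illustrative):

```python
import numpy as np

def block_coordinate_descent(Y, k=2, iters=200, seed=0):
    """Alternately solve for two factor matrices of Y ~ E @ M,
    each step being a least-squares subproblem with the other block
    fixed; a toy stand-in for the paper's alternating scheme."""
    rng = np.random.default_rng(seed)
    E = rng.random((Y.shape[0], k))
    M = rng.random((k, Y.shape[1]))
    for _ in range(iters):
        M = np.linalg.lstsq(E, Y, rcond=None)[0]        # fix E, update M
        E = np.linalg.lstsq(M.T, Y.T, rcond=None)[0].T  # fix M, update E
    return E, M
```

On noiseless data of exact rank k, the alternating updates drive the residual toward zero, which is the behavior the block coordinate descent scheme relies on.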
Iterative Collaboration Network Guided by Reconstruction Prior for Medical Image Super-Resolution
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-06-06 DOI: 10.1109/TCI.2025.3577340
Xiaoyan Kui;Zexin Ji;Beiji Zou;Yang Li;Yulan Dai;Liming Chen;Pierre Vera;Su Ruan
Abstract: High-resolution medical images provide more detailed information for better diagnosis. Conventional medical image super-resolution relies on a single task that first extracts features and then upscales based on them, so the extracted features may be incomplete for super-resolution. Recent multi-task learning that combines reconstruction and super-resolution is a good way to obtain additional relevant information, but the interaction between the two tasks is often insufficient, still leading to incomplete and less relevant deep features. To address these limitations, we propose an iterative collaboration network (ICONet) that improves communication between the tasks by progressively incorporating a reconstruction prior into the super-resolution learning procedure in an iterative, collaborative way. It consists of a reconstruction branch, a super-resolution branch, and a SR-Rec fusion module. The reconstruction branch generates an artifact-free image as the prior, which then guides the super-resolution branch. Unlike the widely used convolutional neural networks for extracting local features and Transformers with quadratic computational complexity for modeling long-range dependencies, we develop a new residual spatial-channel feature learning (RSCFL) module for the two branches that efficiently establishes feature relationships in the spatial and channel dimensions. Moreover, the SR-Rec fusion module adaptively fuses the reconstruction prior with the super-resolution features. ICONet is built from multi-stage models that iteratively upscale the low-resolution images in steps of 2x while the two branches interact under multi-stage supervision. Quantitative and qualitative results on the benchmark dataset show that ICONet outperforms most state-of-the-art approaches.
Vol. 11, pp. 827-838.
Citations: 0
$L^{2}$FMamba: Lightweight Light Field Image Super-Resolution With State Space Model
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-06-06 DOI: 10.1109/TCI.2025.3577338
Zeqiang Wei;Kai Jin;Zeyi Hou;Kuan Song;Xiuzhuang Zhou
Abstract: Transformers bring significantly improved performance to light field image super-resolution thanks to their long-range dependency modeling, but the inherently high computational complexity of their core self-attention mechanism increasingly hinders their use in this task. To address this, we first introduce the LF-VSSM block, a novel module inspired by progressive feature extraction that efficiently captures critical long-range spatial-angular dependencies in light field images. LF-VSSM successively extracts spatial features within sub-aperture images, spatial-angular features between sub-aperture images, and spatial-angular features between light field image pixels. On this basis, we propose a lightweight network, $L^{2}$FMamba (Lightweight Light Field Mamba), which integrates the LF-VSSM block to leverage light field features for super-resolution while avoiding the computational burden of Transformer-based approaches. Extensive experiments on multiple light field datasets demonstrate that our method reduces the parameter count and complexity while achieving superior super-resolution performance with faster inference.
Vol. 11, pp. 816-826.
Citations: 0