Latest Publications in IEEE Transactions on Computational Imaging

Data-Driven Discovery of Mechanical Models Directly From MRI Spectral Data
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-11-13 · DOI: 10.1109/TCI.2024.3497775
David G. J. Heesterbeek; Max H. C. van Riel; Tristan van Leeuwen; Cornelis A. T. van den Berg; Alessandro Sbrizzi
Abstract: Finding interpretable biomechanical models can provide insight into the functionality of organs with regard to physiology and disease. However, identifying broadly applicable dynamical models for in vivo tissue remains challenging. In this proof-of-concept study, we propose a reconstruction framework for data-driven discovery of dynamical models from experimentally obtained undersampled MRI spectral data. The method builds on the previously developed spectro-dynamic framework, which allows reconstruction of displacement fields at the high spatial and temporal resolution required for model identification, and combines it with data-driven discovery of interpretable models using Sparse Identification of Non-linear Dynamics (SINDy). The reconstruction algorithm is designed so that the displacement-field reconstruction and the model identification reinforce each other, and the method does not rely on periodicity of the motion. It is validated using spectral data of a dynamic phantom acquired on a clinical MRI scanner, with the phantom programmed to perform motion governed by five different (non-linear) ordinary differential equations. The proposed framework outperformed a two-step approach in which the displacement fields were first reconstructed from the undersampled data without any model information, followed by data-driven discovery of the model from the reconstructed displacement fields. This study serves as a first step toward data-driven discovery of in vivo models.
Volume 10, pp. 1640-1649 · Citations: 0
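SINDy, which the abstract couples to the spectro-dynamic reconstruction, recovers a governing equation by sparse regression over a library of candidate terms. The paper's joint reconstruction is far more involved; the sketch below only illustrates the core sequentially-thresholded least-squares step on a synthetic 1D example (the library, threshold, and toy dynamics are illustrative choices, not taken from the paper).

```python
import numpy as np

def sindy_fit(x, dx, library, threshold=0.1, iters=10):
    """Sequentially-thresholded least squares: the core step of SINDy.

    x:  (T,) sampled states, dx: (T,) their time derivatives,
    library: list of (name, fn) candidate terms evaluated on x.
    """
    Theta = np.column_stack([fn(x) for _, fn in library])  # (T, K) library matrix
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]         # initial dense fit
    for _ in range(iters):
        small = np.abs(xi) < threshold                     # prune weak terms
        xi[small] = 0.0
        big = ~small
        if big.any():                                      # refit surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return {name: c for (name, _), c in zip(library, xi) if c != 0.0}

# Toy example: recover dx/dt = -2*x + 0.5*x**3 from clean samples.
x = np.linspace(-2, 2, 200)
dx = -2.0 * x + 0.5 * x**3
library = [("1", lambda v: np.ones_like(v)),
           ("x", lambda v: v),
           ("x^2", lambda v: v**2),
           ("x^3", lambda v: v**3)]
model = sindy_fit(x, dx, library)
```

On this noiseless toy problem the regression is exact, so only the two true terms survive the thresholding.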
INF3: Implicit Neural Feature Fusion Function for Multispectral and Hyperspectral Image Fusion
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-11-11 · DOI: 10.1109/TCI.2024.3488569
Ruo-Cheng Wu; Shangqi Deng; Ran Ran; Hong-Xia Dou; Liang-Jian Deng
Abstract: Multispectral and Hyperspectral Image Fusion (MHIF) aims to fuse a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) acquired of the same scene to obtain a high-resolution hyperspectral image (HR-HSI). Benefiting from their powerful inductive bias, convolutional neural network (CNN) based methods have achieved great success on the MHIF task. However, they lack flexibility when processing multi-scale images and require stacked convolution structures to enhance performance. Implicit neural representation (INR) has recently achieved good performance and interpretability in 2D processing tasks thanks to its ability to locally interpolate samples and exploit multimodal content such as pixels and coordinates. Although INR-based approaches show promising results, they place additional demands on high-frequency information (e.g., positional encoding). In this paper, we propose using the HR-MSI as a high-frequency detail auxiliary input, introducing a new INR-based hyperspectral fusion function called the implicit neural feature fusion function (INF3). The method overcomes the inherent shortcomings of vanilla INR and thereby solves the MHIF problem. Specifically, INF3 uses a dual high-frequency fusion (DHFF) structure that extracts high-frequency information from the HR-MSI and LR-HSI and fuses it with coordinate information. Moreover, INF3 incorporates a parameter-free method, INR with cosine similarity (INR-CS), that uses cosine similarity between feature vectors to generate local weights. Building on INF3, we construct an implicit neural fusion network (INFN) that achieves state-of-the-art performance on the MHIF task on two public datasets, CAVE and Harvard. It also reaches an advanced level on the pansharpening task, demonstrating the flexibility of the proposed approach.
Volume 10, pp. 1547-1558 · Citations: 0
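The INR-CS module generates local interpolation weights from cosine similarity between feature vectors. The exact formulation is in the paper; a minimal parameter-free weighting in that spirit might look like the following (clipping negatively aligned neighbors to zero is an assumption of this sketch, not necessarily the paper's choice).

```python
import numpy as np

def cosine_similarity_weights(query, neighbors, eps=1e-8):
    """Parameter-free local weights from cosine similarity.

    query:     (C,) feature vector at the target location.
    neighbors: (N, C) feature vectors of the N nearby samples.
    Returns (N,) non-negative weights that sum to 1.
    """
    q = query / (np.linalg.norm(query) + eps)
    n = neighbors / (np.linalg.norm(neighbors, axis=1, keepdims=True) + eps)
    sim = n @ q                      # cosine similarities in [-1, 1]
    w = np.clip(sim, 0.0, None)     # assumption: discard negatively aligned neighbors
    s = w.sum()
    return w / s if s > eps else np.full(len(neighbors), 1.0 / len(neighbors))

# A neighbor parallel to the query dominates an orthogonal one:
q = np.array([1.0, 0.0])
nbrs = np.array([[2.0, 0.0],   # same direction as q -> similarity 1
                 [0.0, 3.0]])  # orthogonal to q     -> similarity 0
w = cosine_similarity_weights(q, nbrs)
```

Because the weights depend only on feature directions, no learnable parameters are introduced, which matches the "parameter-free" claim in the abstract.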
Recurrent Cross-Modality Fusion for Time-of-Flight Depth Denoising
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-11-11 · DOI: 10.1109/TCI.2024.3496312
Guanting Dong; Yueyi Zhang; Xiaoyan Sun; Zhiwei Xiong
Abstract: The widespread use of Time-of-Flight (ToF) depth cameras in academia and industry is limited by noise, such as Multi-Path Interference (MPI) and shot noise, which hampers their ability to produce high-quality depth images. Existing learning-based ToF denoising methods often struggle to deliver satisfactory performance in complex scenes, primarily because MPI arises from multiple reflected signals, making it difficult to predict directly with spatially-varying convolutions. To address this limitation, we adopt a recurrent architecture that exploits the prior that MPI is decomposable into an additive combination of the geometric information of neighboring pixels. Our approach employs a Gated Recurrent Unit (GRU) based network to estimate a long-distance aggregation process, simplifying MPI removal and updating the depth correction over multiple steps. Additionally, we introduce a global restoration module and a local update module to fuse depth and amplitude features, which improves denoising performance and prevents structural distortions. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our approach over state-of-the-art methods.
Volume 10, pp. 1626-1639 · Citations: 0
Equilibrium Model With Anisotropy for Model-Based Reconstruction in Magnetic Particle Imaging
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-11-01 · DOI: 10.1109/TCI.2024.3490381
Marco Maass; Tobias Kluth; Christine Droigk; Hannes Albers; Konrad Scheffler; Alfred Mertins; Tobias Knopp
Abstract: Magnetic particle imaging is a tracer-based tomographic imaging technique that determines the concentration of magnetic nanoparticles with high spatio-temporal resolution. To reconstruct an image of the tracer concentration, the magnetization dynamics of the particles must be accurately modeled. A popular ensemble model is based on solving the Fokker-Planck equation, taking into account either Brownian or Néel dynamics; its disadvantage is that it is computationally expensive due to an underlying stiff differential equation. A simplified alternative is the equilibrium model, which can be evaluated directly but in most relevant cases suffers from a non-negligible modeling error. In the present work, we investigate an extended equilibrium model that accounts for particle anisotropy. We show that this model can be expressed as a series of Bessel functions, which can be truncated at a predefined accuracy, yielding computation times about three orders of magnitude lower than equivalent Fokker-Planck computations. We investigate the accuracy of the model for 2D Lissajous magnetic particle imaging sequences and show that the difference between the Fokker-Planck model and the equilibrium model with anisotropy is small enough that the latter can be used for image reconstruction on experimental data with only marginal loss of image quality, even compared to a system matrix-based reconstruction.
Volume 10, pp. 1588-1601 · Citations: 0
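For context, the classical isotropic equilibrium model evaluates the ensemble magnetization directly through the Langevin function; the paper's contribution is the anisotropic extension via a Bessel-function series, which is not reproduced here. A sketch of the plain equilibrium response (units and field values are illustrative):

```python
import numpy as np

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi, with L(0) = 0.

    Gives the equilibrium mean magnetization (normalized to saturation)
    of an ideal isotropic superparamagnetic particle ensemble.
    """
    xi = np.asarray(xi, dtype=float)
    out = np.zeros_like(xi)
    nz = np.abs(xi) > 1e-6
    out[nz] = 1.0 / np.tanh(xi[nz]) - 1.0 / xi[nz]
    out[~nz] = xi[~nz] / 3.0          # Taylor limit near zero avoids 0/0
    return out

# Equilibrium magnetization over a sweep of applied field strengths:
H = np.linspace(-10.0, 10.0, 5)
m = langevin(H)
```

Unlike the Fokker-Planck model, this is a pointwise function evaluation with no differential equation to solve, which is why equilibrium-type models are so much cheaper.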
F2IFlow for CT Metal Artifact Reduction
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-11-01 · DOI: 10.1109/TCI.2024.3485538
Jiandong Su; Ce Wang; Yinsheng Li; Dong Liang; Kun Shang
Abstract: Computed Tomography (CT) is extensively utilized for medical diagnosis, assessment, and treatment planning and guidance. However, image quality is significantly compromised when metallic implants are present in the patient's body, affecting clinical diagnosis and radiation therapy dose calculation. Previous Metal Artifact Reduction (MAR) methods either require prior knowledge about the metallic implants or exhibit modeling bias in the mechanism of artifact formation, which restricts their ability to recover high-quality CT images and complicates practical application. In this paper, we propose a novel MAR method based on a feature-to-image conditional normalizing flow, named F2IFlow. Specifically, we first design an inherent feature extraction module to obtain the inherent anatomical features of CT images. A feature-to-image flow module then progressively recovers metal-artifact-free CT images through a series of reversible transformations. With these designs, the coarse-to-fine strategy equips our model to deliver exceptional performance. Experimental results on both simulated and clinical datasets demonstrate that our method achieves superior quantitative and qualitative performance, with better visual quality in terms of artifact reduction and image fidelity.
Volume 10, pp. 1533-1546 · Citations: 0
Extended Polar Format Algorithm for Non-Planar Target Imaging With DSM
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-10-31 · DOI: 10.1109/TCI.2024.3490382
Jingwei Chen; Daoxiang An; Dong Feng; Wu Wang; Zhimin Zhou
Abstract: For circular or non-linear acquisition trajectories, synthetic aperture radar (SAR) focusing becomes increasingly sensitive to elevation. Non-planar targets not only appear foreshortened but also blurred, and as the integration angle widens and object elevation increases, the defocus can no longer be ignored. The polar format algorithm (PFA) is an efficient imaging algorithm for circular or non-linear SAR, but conventional PFA does not account for the impact of focusing at an incorrect altitude. In this article, the conventional PFA is extended to incorporate a known digital surface model (DSM) into the imaging process. First, the maximum allowable elevation deviation (MAED) δz_max is derived. Second, for non-planar targets higher than δz_max, data extraction is applied in the range-Doppler domain, and a compensation function constructed separately from the DSM data is applied; the corresponding original echo data is then replaced with the processed data. The whole method involves only fast Fourier transforms (FFT) and complex multiplications, which keeps it computationally efficient. Results on simulated and experimental data demonstrate the effectiveness and practicability of the proposed algorithm.
Volume 10, pp. 1602-1615 · Citations: 0
OccCasNet: Occlusion-Aware Cascade Cost Volume for Light Field Depth Estimation
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-10-30 · DOI: 10.1109/TCI.2024.3488563
Wentao Chao; Fuqing Duan; Xuechun Wang; Yingqian Wang; Ke Lu; Guanghui Wang
Abstract: Depth estimation using the light field (LF) technique is an essential task with a wide range of practical applications. While mainstream approaches based on multi-view stereo can attain exceptional accuracy by constructing finer cost volumes, they are resource-intensive, time-consuming, and often overlook occlusion during cost volume construction. To address these issues and strike a better balance between accuracy and efficiency, we propose an occlusion-aware cascade cost volume for LF depth (disparity) estimation. Our cascaded strategy reduces the number of disparity samples while maintaining a constant sampling interval, enabling the construction of a finer cost volume. We also introduce occlusion maps to enhance accuracy when constructing the occlusion-aware cost volume. Specifically, we first generate a coarse disparity map with a coarse disparity estimation network. We then warp the sub-aperture images (SAIs) of adjacent views to the center view based on the coarse disparity map and generate an occlusion map for each SAI via photo-consistency constraints. Finally, we incorporate the occlusion maps into the cascade cost volume to construct an occlusion-aware refined cost volume, allowing the refined disparity estimation network to yield a more precise disparity map. Extensive experiments demonstrate the effectiveness of our method: compared with state-of-the-art techniques, it achieves a superior balance between accuracy and efficiency and ranks first in the Q25 metric on the HCI 4D benchmark.
Volume 10, pp. 1680-1691 · Citations: 0
GraphEIT: Unsupervised Graph Neural Networks for Electrical Impedance Tomography
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-10-23 · DOI: 10.1109/TCI.2024.3485517
Zixin Liu; Junwu Wang; Qianxue Shan; Dong Liu
Abstract: Convolutional neural network (CNN) based methodologies have found extensive application in electrical impedance tomography (EIT). Convolution, however, is designed for uniform domains such as pixel or voxel images, while the EIT reconstruction problem typically involves nonuniform meshes arising from finite element methods, so reconciling nonuniform and uniform domains is essential. To address this issue, we propose an unsupervised reconstruction approach, termed GraphEIT, designed to tackle EIT problems directly on nonuniform mesh domains. The core idea is to represent conductivity via a fusion model that seamlessly integrates graph neural networks (GNNs) and multi-layer perceptrons (MLPs). Operating in an unsupervised manner eliminates the need for labeled data. Additionally, we incorporate Fourier feature projection to counter the spectral bias of neural networks, guiding the network to capture high-frequency details. Comprehensive experiments demonstrate the effectiveness of the proposed method, with notable improvements in sharpness and shape preservation. Comparative analyses against state-of-the-art techniques underscore its superior convergence and robustness, particularly in the presence of measurement noise.
Volume 10, pp. 1559-1570 · Citations: 0
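The Fourier feature projection mentioned in the abstract is a standard remedy for spectral bias: input coordinates are passed through fixed random sinusoidal projections before entering the network. A minimal sketch (the feature count and bandwidth sigma are illustrative choices, not GraphEIT's settings):

```python
import numpy as np

def fourier_features(coords, n_features=16, sigma=10.0, seed=0):
    """Random Fourier feature projection gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)].

    Lifts low-dimensional coordinates into a higher-dimensional embedding so a
    coordinate-based network can fit high-frequency detail more easily.
    """
    rng = np.random.default_rng(seed)
    d = coords.shape[-1]
    B = rng.normal(0.0, sigma, size=(d, n_features))   # fixed random projection
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Embed 2D mesh-node coordinates into 32-dimensional feature vectors:
coords = np.array([[0.0, 0.0], [0.5, 0.25]])
feats = fourier_features(coords)
```

The projection matrix B is drawn once and frozen; sigma controls the bandwidth of frequencies the downstream network can represent.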
Fixed-Point Convergence of Multi-Block PnP ADMM and Its Application to Hyperspectral Image Restoration
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-10-23 · DOI: 10.1109/TCI.2024.3485467
Weijie Liang; Zhihui Tu; Jian Lu; Kai Tu; Michael K. Ng; Chen Xu
Abstract: Coupling methods that integrate multiple priors have emerged as a pivotal research focus in hyperspectral image (HSI) restoration. Among them, the Plug-and-Play (PnP) framework stands out, pioneering a coupling approach that flexibly integrates diverse methods into model-based approaches. However, convergence analyses of the PnP framework remain largely unexplored: they are limited to 2-block composite optimization problems and thus fail to meet the needs of coupled modeling with multiple priors. This paper focuses on the convergence analysis of PnP-based algorithms for multi-block composite optimization problems. Under the PnP framework and utilizing the alternating direction method of multipliers (ADMM) with a continuation scheme, we propose a unified multi-block PnP ADMM algorithmic framework for HSI restoration. Inspired by the fixed-point convergence theory of the 2-block PnP ADMM, we establish a similar fixed-point convergence guarantee for the multi-block PnP ADMM under an extended condition and provide a feasible parameter-tuning methodology. Based on this framework, we design an effective mixed-noise removal algorithm incorporating global, nonlocal, and deep priors. Extensive experiments validate the algorithm's superiority and competitiveness.
Volume 10, pp. 1571-1587 · Citations: 0
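A multi-block PnP ADMM assigns one auxiliary block per prior, each updated by a plugged-in denoiser. The paper's algorithm, convergence conditions, and denoisers differ; the consensus-form sketch below only shows the iteration structure on a toy 1D signal, with soft-thresholding and a moving average standing in for learned denoisers.

```python
import numpy as np

def pnp_admm_consensus(y, denoisers, rho=1.0, iters=30):
    """Consensus PnP ADMM with one z-block per plugged-in prior.

    min 0.5||x - y||^2 + sum_i g_i(z_i)  s.t.  x = z_i for every block i,
    where each prox of g_i is replaced by a black-box denoiser D_i.
    """
    n_blocks = len(denoisers)
    x = y.copy()
    z = [y.copy() for _ in range(n_blocks)]
    u = [np.zeros_like(y) for _ in range(n_blocks)]
    for _ in range(iters):
        # x-update: quadratic data term, closed form
        x = (y + rho * sum(zi - ui for zi, ui in zip(z, u))) / (1.0 + rho * n_blocks)
        # z-updates: one plug-in denoiser per prior block
        z = [D(x + ui) for D, ui in zip(denoisers, u)]
        # dual (scaled multiplier) updates
        u = [ui + x - zi for ui, zi in zip(u, z)]
    return x

# Toy 1D signal with two simple stand-in "denoisers":
rng = np.random.default_rng(0)
clean = np.zeros(64); clean[20:30] = 1.0
y = clean + 0.3 * rng.normal(size=64)
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.1, 0)    # sparsity prior
smooth = lambda v: np.convolve(v, np.ones(3) / 3, mode="same")  # smoothness prior
x_hat = pnp_admm_consensus(y, [soft, smooth], rho=1.0, iters=30)
```

Each additional prior adds one z/u pair; the x-update stays closed-form because the consensus constraints are quadratic.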
CalibFPA: A Focal Plane Array Imaging System Based on Online Deep-Learning Calibration
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2024-10-16 · DOI: 10.1109/TCI.2024.3477312
Alper Güngör; M. Umut Bahceci; Yasin Ergen; Ahmet Sözak; O. Oner Ekiz; Tolga Yelboga; Tolga Çukur
Abstract: Compressive focal plane arrays (FPA) enable cost-effective high-resolution (HR) imaging by acquiring several multiplexed measurements on a low-resolution (LR) sensor. Multiplexed encoding of the visual scene is often attained via electronically controllable spatial light modulators (SLMs). To capture system non-idealities such as optical aberrations, a system matrix is measured via additional offline scans, where the system response is recorded for a point source at each spatial location on the imaging grid; an HR image can then be reconstructed by solving an inverse problem involving the encoded measurements and the calibration matrix. However, this offline calibration framework is limited by the difficulty of encoding single HR grid locations with a fixed coded aperture, by lengthy calibration scans that must be repeated to account for system drift, and by the computational burden of reconstructions based on dense system matrices. Here, we propose a novel compressive FPA system based on online deep-learning calibration of multiplexed LR measurements (CalibFPA). To acquire multiplexed measurements, we devise an optical setup in which a piezo stage translates a pre-printed fixed coded aperture. We introduce a physics-driven deep-learning method that corrects for the influence of optical aberrations in the multiplexed measurements without offline calibration scans. The corrected measurement matrix is block-diagonal, so it can be processed efficiently to recover HR images with a user-preferred reconstruction algorithm, including least-squares, plug-and-play, or unrolled techniques. On simulated and experimental datasets, we demonstrate that CalibFPA outperforms state-of-the-art compressive FPA methods, and we report analyses validating the design elements of CalibFPA and assessing computational complexity.
Volume 10, pp. 1650-1663 · Citations: 0
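The practical payoff of a block-diagonal corrected measurement matrix is that each LR pixel decouples into a tiny independent least-squares problem. The sketch below illustrates this with a toy forward model (block size, shot count, and code statistics are illustrative assumptions, not CalibFPA's).

```python
import numpy as np

def recover_hr_blockwise(measurements, codes):
    """Least-squares HR recovery when the system matrix is block-diagonal.

    measurements: (S, H, W) LR frames, one per coded shot.
    codes:        (S, H, W, b, b) per-shot aperture weights for the b x b
                  HR block behind each LR pixel.
    Each LR pixel is solved independently as a small S x b^2 system.
    """
    S, H, W, b, _ = codes.shape
    hr = np.zeros((H * b, W * b))
    for i in range(H):
        for j in range(W):
            A = codes[:, i, j].reshape(S, b * b)   # local forward model
            yij = measurements[:, i, j]            # S observations of this block
            block, *_ = np.linalg.lstsq(A, yij, rcond=None)
            hr[i*b:(i+1)*b, j*b:(j+1)*b] = block.reshape(b, b)
    return hr

# Toy check: 2x2 LR grid, 2x2 HR blocks, S=8 random codes (full-rank blocks).
rng = np.random.default_rng(1)
truth = rng.random((4, 4))
blocks = truth.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3)  # (i, j, k, l) layout
codes = rng.random(size=(8, 2, 2, 2, 2))
meas = np.einsum("sijkl,ijkl->sij", codes, blocks)
hr = recover_hr_blockwise(meas, codes)
```

With enough shots per block (here 8 observations of 4 unknowns), each local system is overdetermined and the noiseless recovery is exact; a dense system matrix would instead couple every HR pixel to every measurement.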