{"title":"Estimation-Denoising Integration Network Architecture With Updated Parameter for MRI Reconstruction","authors":"Tingting Wu;Simiao Liu;Hao Zhang;Tieyong Zeng","doi":"10.1109/TCI.2025.3531729","DOIUrl":"https://doi.org/10.1109/TCI.2025.3531729","url":null,"abstract":"In recent years, plug-and-play (PnP) approaches have emerged as an appealing strategy for recovering magnetic resonance imaging. Compared with traditional compressed sensing methods, these approaches can leverage innovative denoisers to exploit the richer structure of medical images. However, most state-of-the-art networks are not able to adaptively remove noise at each level. To solve this problem, we propose a joint denoising network based on PnP trained to evaluate the noise distribution, realizing efficient, flexible, and accurate reconstruction. The ability of the first subnetwork to estimate complex distributions is utilized to implicitly learn noisy features, effectively tackling the difficulty of precisely delineating the obscure noise law. The second subnetwork builds on the first network and can denoise and reconstruct the image after obtaining the noise distribution. Precisely, the hyperparameter is dynamically adjusted to regulate the denoising level throughout each iteration, ensuring the convergence of our model. This step can gradually remove the image noise and use previous knowledge extracted from the frequency domain to enhance spatial particulars simultaneously. 
The experimental results significantly improve quantitative metrics and visual performance on different datasets.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"142-153"},"PeriodicalIF":4.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
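The iteration this abstract describes, alternating data consistency with denoising under a dynamically lowered denoising level, is the standard PnP pattern. A minimal single-coil sketch, where the `denoise` stand-in and the geometric `decay` schedule are illustrative assumptions rather than the paper's learned subnetworks:

```python
import numpy as np

def denoise(x, sigma):
    # Crude stand-in denoiser: blend with the 4-neighbor local mean,
    # with the blend weight set by the current noise level sigma.
    blur = 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                   + np.roll(x, 1, 1) + np.roll(x, -1, 1))
    w = min(1.0, 5.0 * sigma)
    return (1 - w) * x + w * blur

def pnp_mri_sketch(y, mask, n_iters=50, sigma0=0.2, decay=0.9):
    """Generic PnP loop for single-coil Cartesian MRI.

    y    : undersampled k-space (complex, zero elsewhere)
    mask : boolean sampling mask, same shape as y
    """
    x = np.fft.ifft2(y)                  # zero-filled initial reconstruction
    sigma = sigma0
    for _ in range(n_iters):
        # Data-consistency step: re-impose the measured k-space samples.
        k = np.fft.fft2(x)
        k[mask] = y[mask]
        x = np.fft.ifft2(k)
        # Denoising step, with a dynamically decreasing denoising level.
        x = denoise(x, sigma)
        sigma *= decay
    return x
```

In the proposed method both the noise-distribution estimate and the denoiser are learned subnetworks; the stand-in above only marks where they plug into the loop.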
{"title":"Denoising Knowledge Transfer Model for Zero-Shot MRI Reconstruction","authors":"Ruizhi Hou;Fang Li","doi":"10.1109/TCI.2025.3525960","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525960","url":null,"abstract":"Though fully-supervised deep learning methods have made remarkable achievements in accelerated magnetic resonance imaging (MRI) reconstruction, the fully-sampled or high-quality data is unavailable in many scenarios. Zero-shot learning enables training on under-sampled data. However, the limited information in under-sampled data inhibits the neural network from realizing its full potential. This paper proposes a novel learning framework to enhance the diversity of the learned prior in zero-shot learning and improve the reconstruction quality. It consists of three stages: multi-weighted zero-shot ensemble learning, denoising knowledge transfer, and model-guided reconstruction. In the first stage, the ensemble models are trained using a multi-weighted loss function in k-space, yielding results with higher quality and diversity. In the second stage, we propose to use the deep denoiser to distill the knowledge in the ensemble models. Additionally, the denoiser is initialized using weights pre-trained on nature images, combining external knowledge with the information from under-sampled data. In the third stage, the denoiser is plugged into the iteration algorithm to produce the final reconstructed image. Extensive experiments demonstrate that our proposed framework surpasses existing zero-shot methods and can flexibly adapt to different datasets. 
In multi-coil reconstruction, our proposed zero-shot learning framework outperforms the state-of-the-art denoising-based methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"52-64"},"PeriodicalIF":4.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
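The first-stage "multi-weighted loss function in k-space" can be illustrated with a generic weighted k-space objective. The `(w1, w2)` L1/L2 mix below is an assumption for illustration, not the paper's exact formulation; the point is that each ensemble member trains under a different weighting to encourage diversity:

```python
import numpy as np

def weighted_kspace_loss(pred_img, target_k, mask, w):
    """Illustrative weighted k-space loss for one ensemble member.

    pred_img : current image estimate (real or complex)
    target_k : measured k-space samples
    mask     : sampling mask (1 where measured, 0 elsewhere)
    w        : (w1, w2) per-member weights balancing L1 and L2 terms
    """
    pred_k = np.fft.fft2(pred_img)
    resid = (pred_k - target_k) * mask          # compare only measured lines
    return w[0] * np.mean(np.abs(resid)) + w[1] * np.mean(np.abs(resid) ** 2)
```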
{"title":"Dynamic Scene Reconstruction for Color Spike Camera via Zero-Shot Learning","authors":"Yanchen Dong;Ruiqin Xiong;Xiaopeng Fan;Shuyuan Zhu;Jin Wang;Tiejun Huang","doi":"10.1109/TCI.2025.3527156","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527156","url":null,"abstract":"As a neuromorphic vision sensor with ultra-high temporal resolution, spike camera shows great potential in high-speed imaging. To capture color information of dynamic scenes, color spike camera (CSC) has been invented with a Bayer-pattern color filter array (CFA) on the sensor. Some spike camera reconstruction methods try to train end-to-end models by massive synthetic data pairs. However, there are gaps between synthetic and real-world captured data. The distribution of training data impacts model generalizability. In this paper, we propose a zero-shot learning-based method for CSC reconstruction to restore color images from a Bayer-pattern spike stream without pre-training. As the Bayer-pattern spike stream consists of binary signal arrays with missing pixels, we propose to leverage temporally neighboring spike signals of frame, pixel and interval levels to restore color channels. In particular, we employ a zero-shot learning-based scheme to iteratively refine the output via temporally neighboring spike stream clips. To generate high-quality pseudo-labels, we propose to exploit temporally neighboring pixels along the motion direction to estimate the missing pixels. Besides, a temporally neighboring spike interval-based representation is developed to extract temporal and color features from the binary Bayer-pattern spike stream. 
Experimental results on real-world captured data demonstrate that our method can restore color images with better visual quality than compared methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"129-141"},"PeriodicalIF":4.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
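The premise that temporally neighboring spikes carry the intensity signal is visible already in the simplest estimator, a temporal firing-rate average. The paper's motion-aligned pseudo-labels and zero-shot refinement go well beyond this sketch; it only shows why temporal neighbors are informative:

```python
import numpy as np

def spike_rate_image(spikes, t, win=32):
    """Rough intensity estimate from a binary spike stream by averaging
    firings in a temporal window centered on time t.

    spikes : (T, H, W) array of 0/1 firings
    t      : reconstruction time index
    win    : window length in frames (illustrative default)
    """
    lo = max(0, t - win // 2)
    hi = min(spikes.shape[0], t + win // 2)
    # A brighter pixel fires more often, so its windowed mean is higher.
    return spikes[lo:hi].mean(axis=0)
```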
{"title":"High-Throughput Decomposition-Inspired Deep Unfolding Network for Image Compressed Sensing","authors":"Tiancheng Li;Qiurong Yan;Yi Li;Jinwei Yan","doi":"10.1109/TCI.2025.3527880","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527880","url":null,"abstract":"Deep Unfolding Network (DUN) has achieved great success in the image Compressed Sensing (CS) field benefiting from its great interpretability and performance. However, existing DUNs suffer from limited information transmission capacity with increasingly complex structures, leading to undesirable results. Besides, current DUNs are mostly established based on one specific optimization algorithm, which hampers the development and understanding of DUN. In this paper, we propose a new unfolding formula combining the Approximate Message Passing algorithm (AMP) and Range-Nullspace Decomposition (RND), which offers new insights for DUN design. To maximize information transmission and utilization, we propose a novel High-Throughput Decomposition-Inspired Deep Unfolding Network (HTDIDUN) based on the new formula. Specifically, we design a powerful Nullspace Information Extractor (NIE) with high-throughput transmission and stacked residual channel attention blocks. By modulating the dimension of the feature space, we provide three implementations from small to large. Extensive experiments on natural and medical images manifest that our HTDIDUN family members outperform other state-of-the-art methods by a large margin. 
Our codes and pre-trained models are available on GitHub to facilitate further exploration.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"89-100"},"PeriodicalIF":4.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
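The Range-Nullspace Decomposition (RND) behind the unfolding formula splits any signal into a component fixed by the measurements and a component invisible to the sensing matrix, which the network (the NIE) must infer. A numerical illustration of the identity, not the HTDIDUN architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 25))   # underdetermined sensing matrix (m < n)
x = rng.standard_normal(25)         # ground-truth signal
y = A @ x                           # compressed measurements

A_pinv = np.linalg.pinv(A)
range_part = A_pinv @ y             # fully determined by the measurements
null_part = x - A_pinv @ (A @ x)    # invisible to A: A @ null_part ~ 0

# The decomposition is exact: x = range_part + null_part,
# and the nullspace component contributes nothing to y.
assert np.allclose(A @ null_part, 0.0, atol=1e-8)
assert np.allclose(range_part + null_part, x)
```

Reconstruction then amounts to keeping `range_part` consistent with `y` while a learned prior fills in `null_part`.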
{"title":"Fast and Robust Phase Retrieval via Deep Expectation-Consistent Approximation","authors":"Saurav K. Shastri;Philip Schniter","doi":"10.1109/TCI.2025.3527140","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527140","url":null,"abstract":"Accurately recovering images from phaseless measurements is a challenging and long-standing problem. In this work, we present “deepECpr,” which combines expectation-consistent (EC) approximation with deep denoising networks to surpass state-of-the-art phase-retrieval methods in both speed and accuracy. In addition to applying EC in a non-traditional manner, deepECpr includes a novel stochastic damping scheme that is inspired by recent diffusion methods. Like existing phase-retrieval methods based on plug-and-play priors, regularization by denoising, or diffusion, deepECpr iterates a denoising stage with a measurement-exploitation stage. But unlike existing methods, deepECpr requires far fewer denoiser calls. We compare deepECpr to the state-of-the-art prDeep (Metzler et al., 2018), Deep-ITA (Wang et al., 2020), DOLPH (Shoushtari et al., 2023), and Diffusion Posterior Sampling (Chung et al., 2023) methods for noisy phase-retrieval of color, natural, and unnatural grayscale images on oversampled-Fourier and coded-diffraction-pattern measurements and find improvements in both PSNR and SSIM with significantly fewer denoiser calls.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"116-128"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guided Depth Inpainting in ToF Image Sensing Based on Near Infrared Information","authors":"Amina Achaibou;Filiberto Pla;Javier Calpe","doi":"10.1109/TCI.2025.3527159","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527159","url":null,"abstract":"Accurate depth estimation is crucial in various computer vision applications, such as robotics, augmented reality, or autonomous driving. Despite the common use of Time-of-Flight (ToF) sensing systems, they still face challenges such as invalid pixels and missing depth values, particularly with low light reflectance, distant objects, or light-saturated conditions. Cameras using indirect ToF technology provide depth maps along with active infrared brightness images, which can offer a potential guide for depth restoration in fusion approaches. This study proposes a method for depth completion by combining depth and active infrared images in ToF systems. The approach is based on a belief propagation strategy to extend valid nearby information in missing depth regions, using the infrared gradient for depth consistency. Emphasis is placed on considering object edges, especially those coinciding with depth discontinuities, to approximate missing values. Empirical results demonstrate the efficiency and simplicity of the proposed algorithm, showcasing superior outcomes compared to other reference guided depth inpainting methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"154-169"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Signal Processing Society Information","authors":"","doi":"10.1109/TCI.2024.3525385","DOIUrl":"https://doi.org/10.1109/TCI.2024.3525385","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"C2-C2"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10833177","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"APANet: Asymmetrical Parallax Attention Network for Efficient Stereo Image Deraining","authors":"Chenglong Wang;Tao Yan;Weilong Huang;Xianglong Chen;Ke Xu;Xiaojun Chang","doi":"10.1109/TCI.2025.3527142","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527142","url":null,"abstract":"Recently, several stereo image deraining methods have been proposed to recover clean backgrounds from rainy stereo images by exploring and exploiting intra and inter-view information. Despite these methods have achieved great progress, they under-utilize the parallax information of input images, and do not take advantage of existing high-quality and abundant single image rainy datasets for learning. In this paper, we propose an effective and efficient network, named Asymmetrical Parallax Attention Network (APANet), for stereo image deraining. Specifically, to fully exploit the parallax information, we first adopt an External Attention Module (EAM), which consists of an external attention block with two learnable memories, and a gated feed-forward network, for achieving a better feature representation by incorporating the correlations between all samples. Subsequently, we propose an Asymmetrical Parallax Attention Module (APAM) to efficiently exploit the cross-attention between the features separately extracted from the left and right views, which filters useless stereo feature relationships with a well-designed mask calculated by excavating the parallax information (positional information of each matched pixel pair within a stereo image). For learning our network, we also construct an unpaired real-world stereo rainy image dataset, called StereoRealRain, which consists of some video clips (including 11803 image pairs). Moreover, we also introduce a Single-to-Stereo Image Deraining Distillation strategy for transferring the knowledge learned from single images deraining to stereo images deraining to improve the generalization ability of our network. 
Extensive experiments conducted on synthetic and real-world stereo rainy datasets demonstrate the effectiveness of our method.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"101-115"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
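The role of the APAM mask, filtering out implausible left/right pixel pairings inside cross-view attention, can be sketched along one epipolar line. How the mask is built from parallax is the paper's contribution; here it is simply assumed given, and every row is assumed to have at least one plausible pair:

```python
import numpy as np

def masked_cross_attention(qL, kR, vR, mask):
    """Cross-view attention along the width (epipolar) dimension.

    qL         : (W, C) left-view queries
    kR, vR     : (W, C) right-view keys and values
    mask       : (W, W) booleans, True where a left/right pixel pair is
                 geometrically plausible (each row must have a True entry)
    """
    scores = qL @ kR.T / np.sqrt(qL.shape[1])
    scores = np.where(mask, scores, -np.inf)       # kill implausible pairs
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ vR
```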
{"title":"CSA-FCN: Channel- and Spatial-Gated Attention Mechanism Based Fully Complex-Valued Neural Network for System Matrix Calibration in Magnetic Particle Imaging","authors":"Shuangchen Li;Lizhi Zhang;Hongbo Guo;Jintao Li;Jingjing Yu;Xuelei He;Yizhe Zhao;Xiaowei He","doi":"10.1109/TCI.2025.3525948","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525948","url":null,"abstract":"Magnetic particle imaging (MPI) is an emerging medical imaging technique that visualizes the spatial distribution of magnetic nanoparticles (MNPs). The system matrix (SM)-based reconstruction is enable to sensitively account for various system imperfections and offers high-fidelity volume images. Yet, the re-calibration of SMs is time-consuming when the imaging mode changes. Here, through adequately analyzing the properties of SMs, a channel- and spatial- gated attention mechanism based fully complex-valued neural network (CSA-FCN) was introduced for SM calibration in MPI. Specifically, a complex-valued constraint model for SM calibration is designed to focus on the complex-valued property of SM samples. Firstly, complex-valued convolution neural network (C-CNN) is leveraged to coarsely extract complex-valued features of the SMs. Additionally, in complex-valued domain, the channel- and spatial-based gated attention mechanisms are constructed to enhance features with lightweight advantage, named C-SEM and C-SAM respectively. C-SEM induces the network to suppress the noise expression at channel-level. C-SAM improves the network context sensitivity at spatial-level. Ultimately, aggregate the features at each level as global embedding representation, and calibrating the SM form local- to full-size through a pre-constructed consistency reconstruction layer. 
Analysis and experiments indicate that CSA-FCN significantly improves the efficiency of SM calibration and has excellent robustness against to different imaging modes.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"65-76"},"PeriodicalIF":4.2,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
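The fully complex-valued building block shared by C-CNN, C-SEM, and C-SAM reduces to four real operations per layer, following the product rule (Wr + iWi)(xr + ixi). A linear layer shows the rule (a convolution applies the same rule kernel-wise); the shapes here are illustrative:

```python
import numpy as np

def complex_linear(x, Wr, Wi):
    """Complex-valued layer computed with real arithmetic.

    x      : complex input vector
    Wr, Wi : real and imaginary parts of the complex weight matrix
    Implements (Wr + i*Wi) @ (xr + i*xi) via four real matmuls.
    """
    xr, xi = x.real, x.imag
    real = Wr @ xr - Wi @ xi
    imag = Wr @ xi + Wi @ xr
    return real + 1j * imag
```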
{"title":"Exoplanet Imaging via Differentiable Rendering","authors":"Brandon Y. Feng;Rodrigo Ferrer-Chávez;Aviad Levis;Jason J. Wang;Katherine L. Bouman;William T. Freeman","doi":"10.1109/TCI.2025.3525971","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525971","url":null,"abstract":"Direct imaging of exoplanets is crucial for advancing our understanding of planetary systems beyond our solar system, but it faces significant challenges due to the high contrast between host stars and their planets. Wavefront aberrations introduce speckles in the telescope science images, which are patterns of diffracted starlight that can mimic the appearance of planets, complicating the detection of faint exoplanet signals. Traditional post-processing methods, operating primarily in the image intensity domain, do not integrate wavefront sensing data. These data, measured mainly for adaptive optics corrections, have been overlooked as a potential resource for post-processing, partly due to the challenge of the evolving nature of wavefront aberrations. In this paper, we present a differentiable rendering approach that leverages these wavefront sensing data to improve exoplanet detection. Our differentiable renderer models wave-based light propagation through a coronagraphic telescope system, allowing gradient-based optimization to significantly improve starlight subtraction and increase sensitivity to faint exoplanets. Simulation experiments based on the James Webb Space Telescope configuration demonstrate the effectiveness of our approach, achieving substantial improvements in contrast and planet detection limits. 
Our results showcase how the computational advancements enabled by differentiable rendering can revitalize previously underexploited wavefront data, opening new avenues for enhancing exoplanet imaging and characterization.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"36-51"},"PeriodicalIF":4.2,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10824793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
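The core idea of gradient-based starlight subtraction can be reduced to a toy problem: fit the stellar contribution by gradient descent and search the residual for the planet. This is only a caricature under strong assumptions (a single stellar amplitude, a known PSF); the paper differentiates a full wave-optics coronagraph model informed by wavefront sensing data:

```python
import numpy as np

def fit_starlight(image, psf, n_steps=300, lr=0.4):
    """Fit the stellar amplitude a minimizing ||image - a * psf||^2 by
    gradient descent; the residual is the planet-search image.
    Assumes psf has roughly unit energy so the fixed step lr is stable.
    """
    a = 0.0
    for _ in range(n_steps):
        grad = -2.0 * np.sum((image - a * psf) * psf)  # d/da of the loss
        a -= lr * grad
    return a, image - a * psf
```

With a differentiable renderer the same gradient flows through the optical model itself rather than through a single amplitude.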