{"title":"High-Throughput Decomposition-Inspired Deep Unfolding Network for Image Compressed Sensing","authors":"Tiancheng Li;Qiurong Yan;Yi Li;Jinwei Yan","doi":"10.1109/TCI.2025.3527880","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527880","url":null,"abstract":"Deep Unfolding Network (DUN) has achieved great success in the image Compressed Sensing (CS) field benefiting from its great interpretability and performance. However, existing DUNs suffer from limited information transmission capacity with increasingly complex structures, leading to undesirable results. Besides, current DUNs are mostly established based on one specific optimization algorithm, which hampers the development and understanding of DUN. In this paper, we propose a new unfolding formula combining the Approximate Message Passing algorithm (AMP) and Range-Nullspace Decomposition (RND), which offers new insights for DUN design. To maximize information transmission and utilization, we propose a novel High-Throughput Decomposition-Inspired Deep Unfolding Network (HTDIDUN) based on the new formula. Specifically, we design a powerful Nullspace Information Extractor (NIE) with high-throughput transmission and stacked residual channel attention blocks. By modulating the dimension of the feature space, we provide three implementations from small to large. Extensive experiments on natural and medical images manifest that our HTDIDUN family members outperform other state-of-the-art methods by a large margin. 
Our codes and pre-trained models are available on GitHub to facilitate further exploration.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"89-100"},"PeriodicalIF":4.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast and Robust Phase Retrieval via Deep Expectation-Consistent Approximation","authors":"Saurav K. Shastri;Philip Schniter","doi":"10.1109/TCI.2025.3527140","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527140","url":null,"abstract":"Accurately recovering images from phaseless measurements is a challenging and long-standing problem. In this work, we present “deepECpr,” which combines expectation-consistent (EC) approximation with deep denoising networks to surpass state-of-the-art phase-retrieval methods in both speed and accuracy. In addition to applying EC in a non-traditional manner, deepECpr includes a novel stochastic damping scheme that is inspired by recent diffusion methods. Like existing phase-retrieval methods based on plug-and-play priors, regularization by denoising, or diffusion, deepECpr iterates a denoising stage with a measurement-exploitation stage. But unlike existing methods, deepECpr requires far fewer denoiser calls. We compare deepECpr to the state-of-the-art prDeep (Metzler et al., 2018), Deep-ITA (Wang et al., 2020), DOLPH (Shoushtari et al., 2023), and Diffusion Posterior Sampling (Chung et al., 2023) methods for noisy phase-retrieval of color, natural, and unnatural grayscale images on oversampled-Fourier and coded-diffraction-pattern measurements and find improvements in both PSNR and SSIM with significantly fewer denoiser calls.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"116-128"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guided Depth Inpainting in ToF Image Sensing Based on Near Infrared Information","authors":"Amina Achaibou;Filiberto Pla;Javier Calpe","doi":"10.1109/TCI.2025.3527159","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527159","url":null,"abstract":"Accurate depth estimation is crucial in various computer vision applications, such as robotics, augmented reality, or autonomous driving. Despite the common use of Time-of-Flight (ToF) sensing systems, they still face challenges such as invalid pixels and missing depth values, particularly with low light reflectance, distant objects, or light-saturated conditions. Cameras using indirect ToF technology provide depth maps along with active infrared brightness images, which can offer a potential guide for depth restoration in fusion approaches. This study proposes a method for depth completion by combining depth and active infrared images in ToF systems. The approach is based on a belief propagation strategy to extend valid nearby information in missing depth regions, using the infrared gradient for depth consistency. Emphasis is placed on considering object edges, especially those coinciding with depth discontinuities, to approximate missing values. Empirical results demonstrate the efficiency and simplicity of the proposed algorithm, showcasing superior outcomes compared to other reference guided depth inpainting methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"154-169"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Signal Processing Society Information","authors":"","doi":"10.1109/TCI.2024.3525385","DOIUrl":"https://doi.org/10.1109/TCI.2024.3525385","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"C2-C2"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10833177","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"APANet: Asymmetrical Parallax Attention Network for Efficient Stereo Image Deraining","authors":"Chenglong Wang;Tao Yan;Weilong Huang;Xianglong Chen;Ke Xu;Xiaojun Chang","doi":"10.1109/TCI.2025.3527142","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527142","url":null,"abstract":"Recently, several stereo image deraining methods have been proposed to recover clean backgrounds from rainy stereo images by exploring and exploiting intra and inter-view information. Despite these methods have achieved great progress, they under-utilize the parallax information of input images, and do not take advantage of existing high-quality and abundant single image rainy datasets for learning. In this paper, we propose an effective and efficient network, named Asymmetrical Parallax Attention Network (APANet), for stereo image deraining. Specifically, to fully exploit the parallax information, we first adopt an External Attention Module (EAM), which consists of an external attention block with two learnable memories, and a gated feed-forward network, for achieving a better feature representation by incorporating the correlations between all samples. Subsequently, we propose an Asymmetrical Parallax Attention Module (APAM) to efficiently exploit the cross-attention between the features separately extracted from the left and right views, which filters useless stereo feature relationships with a well-designed mask calculated by excavating the parallax information (positional information of each matched pixel pair within a stereo image). For learning our network, we also construct an unpaired real-world stereo rainy image dataset, called StereoRealRain, which consists of some video clips (including 11803 image pairs). Moreover, we also introduce a Single-to-Stereo Image Deraining Distillation strategy for transferring the knowledge learned from single images deraining to stereo images deraining to improve the generalization ability of our network. 
Extensive experiments conducted on synthetic and real-world stereo rainy datasets demonstrate the effectiveness of our method.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"101-115"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CSA-FCN: Channel- and Spatial-Gated Attention Mechanism Based Fully Complex-Valued Neural Network for System Matrix Calibration in Magnetic Particle Imaging","authors":"Shuangchen Li;Lizhi Zhang;Hongbo Guo;Jintao Li;Jingjing Yu;Xuelei He;Yizhe Zhao;Xiaowei He","doi":"10.1109/TCI.2025.3525948","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525948","url":null,"abstract":"Magnetic particle imaging (MPI) is an emerging medical imaging technique that visualizes the spatial distribution of magnetic nanoparticles (MNPs). The system matrix (SM)-based reconstruction is enable to sensitively account for various system imperfections and offers high-fidelity volume images. Yet, the re-calibration of SMs is time-consuming when the imaging mode changes. Here, through adequately analyzing the properties of SMs, a channel- and spatial- gated attention mechanism based fully complex-valued neural network (CSA-FCN) was introduced for SM calibration in MPI. Specifically, a complex-valued constraint model for SM calibration is designed to focus on the complex-valued property of SM samples. Firstly, complex-valued convolution neural network (C-CNN) is leveraged to coarsely extract complex-valued features of the SMs. Additionally, in complex-valued domain, the channel- and spatial-based gated attention mechanisms are constructed to enhance features with lightweight advantage, named C-SEM and C-SAM respectively. C-SEM induces the network to suppress the noise expression at channel-level. C-SAM improves the network context sensitivity at spatial-level. Ultimately, aggregate the features at each level as global embedding representation, and calibrating the SM form local- to full-size through a pre-constructed consistency reconstruction layer. 
Analysis and experiments indicate that CSA-FCN significantly improves the efficiency of SM calibration and has excellent robustness against to different imaging modes.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"65-76"},"PeriodicalIF":4.2,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exoplanet Imaging via Differentiable Rendering","authors":"Brandon Y. Feng;Rodrigo Ferrer-Chávez;Aviad Levis;Jason J. Wang;Katherine L. Bouman;William T. Freeman","doi":"10.1109/TCI.2025.3525971","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525971","url":null,"abstract":"Direct imaging of exoplanets is crucial for advancing our understanding of planetary systems beyond our solar system, but it faces significant challenges due to the high contrast between host stars and their planets. Wavefront aberrations introduce speckles in the telescope science images, which are patterns of diffracted starlight that can mimic the appearance of planets, complicating the detection of faint exoplanet signals. Traditional post-processing methods, operating primarily in the image intensity domain, do not integrate wavefront sensing data. These data, measured mainly for adaptive optics corrections, have been overlooked as a potential resource for post-processing, partly due to the challenge of the evolving nature of wavefront aberrations. In this paper, we present a differentiable rendering approach that leverages these wavefront sensing data to improve exoplanet detection. Our differentiable renderer models wave-based light propagation through a coronagraphic telescope system, allowing gradient-based optimization to significantly improve starlight subtraction and increase sensitivity to faint exoplanets. Simulation experiments based on the James Webb Space Telescope configuration demonstrate the effectiveness of our approach, achieving substantial improvements in contrast and planet detection limits. 
Our results showcase how the computational advancements enabled by differentiable rendering can revitalize previously underexploited wavefront data, opening new avenues for enhancing exoplanet imaging and characterization.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"36-51"},"PeriodicalIF":4.2,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10824793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"List of Reviewers","authors":"","doi":"10.1109/TCI.2024.3523050","DOIUrl":"https://doi.org/10.1109/TCI.2024.3523050","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"10 ","pages":"1829-1831"},"PeriodicalIF":4.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819699","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cytopathology Image Super-Resolution of Portable Microscope Based on Convolutional Window-Integration Transformer","authors":"Jinyu Zhang;Shenghua Cheng;Xiuli Liu;Ning Li;Gong Rao;Shaoqun Zeng","doi":"10.1109/TCI.2024.3522761","DOIUrl":"https://doi.org/10.1109/TCI.2024.3522761","url":null,"abstract":"High-quality cytopathology images are the guarantee of cervical cancer computer-aided screening. However, obtaining such images is dependent on expensive devices, which hinders the screening popularization in less developed areas. In this study, we propose a convolutional window-integration Transformer for cytopathology image super-resolution (SR) of portable microscope. We use self-attention within the window to integrate patches, and then design a convolutional window-integration feed-forward network with two 5 × 5 size kernels to achieve cross-window patch integration. This design avoids long-range self-attention and facilitates SR local mapping learning. Besides, we design a multi-layer feature fusion in feature extraction to enhance high-frequency details, achieving better SR reconstruction. Finally, we register and establish a dataset of 239,100 paired portable microscope images and standard microscope images based on feature point matching. A series of experiments demonstrate that our model has the minimum parameter number and outperforms state-of-the-art CNN-based and recent Transformer-based SR models with PSNR improvement of 0.09–0.53 dB. 
We release this dataset and codes publicly to promote the development of computational cytopathology imaging.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"77-88"},"PeriodicalIF":4.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ice-Tide: Implicit Cryo-ET Imaging and Deformation Estimation","authors":"Valentin Debarnot;Vinith Kishore;Ricardo D. Righetto;Ivan Dokmanić","doi":"10.1109/TCI.2024.3519805","DOIUrl":"https://doi.org/10.1109/TCI.2024.3519805","url":null,"abstract":"We introduce ICE-TIDE, a method for cryogenic electron tomography (cryo-ET) that simultaneously aligns observations and reconstructs a high-resolution volume. The alignment of tilt series in cryo-ET is a major problem limiting the resolution of reconstructions. ICE-TIDE relies on an efficient coordinate-based implicit neural representation of the volume which enables it to directly parameterize deformations and align the projections. Furthermore, the implicit network acts as an effective regularizer, allowing for high-quality reconstruction at low signal-to-noise ratios as well as partially restoring the missing wedge information. We compare the performance of ICE-TIDE to existing approaches on realistic simulated volumes where the significant gains in resolution and accuracy of recovering deformations can be precisely evaluated. Finally, we demonstrate ICE-TIDE's ability to perform on experimental data sets.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"24-35"},"PeriodicalIF":4.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}