{"title":"IEEE Signal Processing Society Information","authors":"","doi":"10.1109/TCI.2024.3525385","DOIUrl":"https://doi.org/10.1109/TCI.2024.3525385","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"C2-C2"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10833177","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"APANet: Asymmetrical Parallax Attention Network for Efficient Stereo Image Deraining","authors":"Chenglong Wang;Tao Yan;Weilong Huang;Xianglong Chen;Ke Xu;Xiaojun Chang","doi":"10.1109/TCI.2025.3527142","DOIUrl":"https://doi.org/10.1109/TCI.2025.3527142","url":null,"abstract":"Recently, several stereo image deraining methods have been proposed to recover clean backgrounds from rainy stereo images by exploring and exploiting intra and inter-view information. Despite these methods have achieved great progress, they under-utilize the parallax information of input images, and do not take advantage of existing high-quality and abundant single image rainy datasets for learning. In this paper, we propose an effective and efficient network, named Asymmetrical Parallax Attention Network (APANet), for stereo image deraining. Specifically, to fully exploit the parallax information, we first adopt an External Attention Module (EAM), which consists of an external attention block with two learnable memories, and a gated feed-forward network, for achieving a better feature representation by incorporating the correlations between all samples. Subsequently, we propose an Asymmetrical Parallax Attention Module (APAM) to efficiently exploit the cross-attention between the features separately extracted from the left and right views, which filters useless stereo feature relationships with a well-designed mask calculated by excavating the parallax information (positional information of each matched pixel pair within a stereo image). For learning our network, we also construct an unpaired real-world stereo rainy image dataset, called StereoRealRain, which consists of some video clips (including 11803 image pairs). Moreover, we also introduce a Single-to-Stereo Image Deraining Distillation strategy for transferring the knowledge learned from single images deraining to stereo images deraining to improve the generalization ability of our network. Extensive experiments conducted on synthetic and real-world stereo rainy datasets demonstrate the effectiveness of our method.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"101-115"},"PeriodicalIF":4.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CSA-FCN: Channel- and Spatial-Gated Attention Mechanism Based Fully Complex-Valued Neural Network for System Matrix Calibration in Magnetic Particle Imaging","authors":"Shuangchen Li;Lizhi Zhang;Hongbo Guo;Jintao Li;Jingjing Yu;Xuelei He;Yizhe Zhao;Xiaowei He","doi":"10.1109/TCI.2025.3525948","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525948","url":null,"abstract":"Magnetic particle imaging (MPI) is an emerging medical imaging technique that visualizes the spatial distribution of magnetic nanoparticles (MNPs). The system matrix (SM)-based reconstruction is enable to sensitively account for various system imperfections and offers high-fidelity volume images. Yet, the re-calibration of SMs is time-consuming when the imaging mode changes. Here, through adequately analyzing the properties of SMs, a channel- and spatial- gated attention mechanism based fully complex-valued neural network (CSA-FCN) was introduced for SM calibration in MPI. Specifically, a complex-valued constraint model for SM calibration is designed to focus on the complex-valued property of SM samples. Firstly, complex-valued convolution neural network (C-CNN) is leveraged to coarsely extract complex-valued features of the SMs. Additionally, in complex-valued domain, the channel- and spatial-based gated attention mechanisms are constructed to enhance features with lightweight advantage, named C-SEM and C-SAM respectively. C-SEM induces the network to suppress the noise expression at channel-level. C-SAM improves the network context sensitivity at spatial-level. Ultimately, aggregate the features at each level as global embedding representation, and calibrating the SM form local- to full-size through a pre-constructed consistency reconstruction layer. Analysis and experiments indicate that CSA-FCN significantly improves the efficiency of SM calibration and has excellent robustness against to different imaging modes.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"65-76"},"PeriodicalIF":4.2,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exoplanet Imaging via Differentiable Rendering","authors":"Brandon Y. Feng;Rodrigo Ferrer-Chávez;Aviad Levis;Jason J. Wang;Katherine L. Bouman;William T. Freeman","doi":"10.1109/TCI.2025.3525971","DOIUrl":"https://doi.org/10.1109/TCI.2025.3525971","url":null,"abstract":"Direct imaging of exoplanets is crucial for advancing our understanding of planetary systems beyond our solar system, but it faces significant challenges due to the high contrast between host stars and their planets. Wavefront aberrations introduce speckles in the telescope science images, which are patterns of diffracted starlight that can mimic the appearance of planets, complicating the detection of faint exoplanet signals. Traditional post-processing methods, operating primarily in the image intensity domain, do not integrate wavefront sensing data. These data, measured mainly for adaptive optics corrections, have been overlooked as a potential resource for post-processing, partly due to the challenge of the evolving nature of wavefront aberrations. In this paper, we present a differentiable rendering approach that leverages these wavefront sensing data to improve exoplanet detection. Our differentiable renderer models wave-based light propagation through a coronagraphic telescope system, allowing gradient-based optimization to significantly improve starlight subtraction and increase sensitivity to faint exoplanets. Simulation experiments based on the James Webb Space Telescope configuration demonstrate the effectiveness of our approach, achieving substantial improvements in contrast and planet detection limits. Our results showcase how the computational advancements enabled by differentiable rendering can revitalize previously underexploited wavefront data, opening new avenues for enhancing exoplanet imaging and characterization.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"36-51"},"PeriodicalIF":4.2,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10824793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"List of Reviewers","authors":"","doi":"10.1109/TCI.2024.3523050","DOIUrl":"https://doi.org/10.1109/TCI.2024.3523050","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"10 ","pages":"1829-1831"},"PeriodicalIF":4.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819699","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cytopathology Image Super-Resolution of Portable Microscope Based on Convolutional Window-Integration Transformer","authors":"Jinyu Zhang;Shenghua Cheng;Xiuli Liu;Ning Li;Gong Rao;Shaoqun Zeng","doi":"10.1109/TCI.2024.3522761","DOIUrl":"https://doi.org/10.1109/TCI.2024.3522761","url":null,"abstract":"High-quality cytopathology images are the guarantee of cervical cancer computer-aided screening. However, obtaining such images is dependent on expensive devices, which hinders the screening popularization in less developed areas. In this study, we propose a convolutional window-integration Transformer for cytopathology image super-resolution (SR) of portable microscope. We use self-attention within the window to integrate patches, and then design a convolutional window-integration feed-forward network with two 5 × 5 size kernels to achieve cross-window patch integration. This design avoids long-range self-attention and facilitates SR local mapping learning. Besides, we design a multi-layer feature fusion in feature extraction to enhance high-frequency details, achieving better SR reconstruction. Finally, we register and establish a dataset of 239,100 paired portable microscope images and standard microscope images based on feature point matching. A series of experiments demonstrate that our model has the minimum parameter number and outperforms state-of-the-art CNN-based and recent Transformer-based SR models with PSNR improvement of 0.09–0.53 dB. We release this dataset and codes publicly to promote the development of computational cytopathology imaging.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"77-88"},"PeriodicalIF":4.2,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ice-Tide: Implicit Cryo-ET Imaging and Deformation Estimation","authors":"Valentin Debarnot;Vinith Kishore;Ricardo D. Righetto;Ivan Dokmanić","doi":"10.1109/TCI.2024.3519805","DOIUrl":"https://doi.org/10.1109/TCI.2024.3519805","url":null,"abstract":"We introduce ICE-TIDE, a method for cryogenic electron tomography (cryo-ET) that simultaneously aligns observations and reconstructs a high-resolution volume. The alignment of tilt series in cryo-ET is a major problem limiting the resolution of reconstructions. ICE-TIDE relies on an efficient coordinate-based implicit neural representation of the volume which enables it to directly parameterize deformations and align the projections. Furthermore, the implicit network acts as an effective regularizer, allowing for high-quality reconstruction at low signal-to-noise ratios as well as partially restoring the missing wedge information. We compare the performance of ICE-TIDE to existing approaches on realistic simulated volumes where the significant gains in resolution and accuracy of recovering deformations can be precisely evaluated. Finally, we demonstrate ICE-TIDE's ability to perform on experimental data sets.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"24-35"},"PeriodicalIF":4.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction","authors":"Yu Luo;Yue Cai;Jie Ling;Yingdan Ji;Yanmei Tie;Shun Yao","doi":"10.1109/TCI.2024.3518210","DOIUrl":"https://doi.org/10.1109/TCI.2024.3518210","url":null,"abstract":"Magnetic Resonance Imaging (MRI) is a widely used imaging technique, however it has the limitation of long scanning time. Though previous model-based and learning-based MRI reconstruction methods have shown promising performance, most of them have not fully utilized the edge prior of MR images, and there is still much room for improvement. In this paper, we build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them. Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process. Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a-priori information. Numerical experiments, consisting of multi-coil and single-coil MRI data with different sampling schemes at a variety of sampling factors, demonstrate that the proposed method outperforms other state-of-the-art methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"11-23"},"PeriodicalIF":4.2,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FgC2F-UDiff: Frequency-Guided and Coarse-to-Fine Unified Diffusion Model for Multi-Modality Missing MRI Synthesis","authors":"Xiaojiao Xiao;Qinmin Vivian Hu;Guanghui Wang","doi":"10.1109/TCI.2024.3516574","DOIUrl":"https://doi.org/10.1109/TCI.2024.3516574","url":null,"abstract":"Multi-modality magnetic resonance imaging (MRI) is essential for the diagnosis and treatment of brain tumors. However, missing modalities are commonly observed due to limitations in scan time, scan corruption, artifacts, motion, and contrast agent intolerance. Synthesis of missing MRI has been a means to address the limitations of modality insufficiency in clinical practice and research. However, there are still some challenges, such as poor generalization, inaccurate non-linear mapping, and slow processing speeds. To address the aforementioned issues, we propose a novel unified synthesis model, the \u0000<bold>Frequency-guided and Coarse-to-fine Unified Diffusion Model (FgC2F-UDiff)</b>\u0000, designed for multiple inputs and outputs. Specifically, the Coarse-to-fine Unified Network (CUN) fully exploits the iterative denoising properties of diffusion models, from global to detail, by dividing the denoising process into two stages—coarse and fine—to enhance the fidelity of synthesized images. Secondly, the Frequency-guided Collaborative Strategy (FCS) harnesses appropriate frequency information as prior knowledge to guide the learning of a unified, highly non-linear mapping. Thirdly, the Specific-acceleration Hybrid Mechanism (SHM) integrates specific mechanisms to accelerate the diffusion model and enhance the feasibility of many-to-many synthesis. Extensive experimental evaluations have demonstrated that our proposed FgC2F-UDiff model achieves superior performance on two datasets, validated through a comprehensive assessment that includes both qualitative observations and quantitative metrics, such as PSNR SSIM, LPIPS, and FID.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"10 ","pages":"1815-1828"},"PeriodicalIF":4.2,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-Stage Progressive Network for Hyperspectral Image Demosaicing and Denoising","authors":"Zhangxi Xiong;Wei Li;Hanzheng Wang;Baochang Zhang;James E. Fowler","doi":"10.1109/TCI.2024.3515844","DOIUrl":"https://doi.org/10.1109/TCI.2024.3515844","url":null,"abstract":"While snapshot hyperspectral cameras are cheaper and faster than imagers based on pushbroom or whiskbroom spatial scanning, the output imagery from a snapshot camera typically has different spectral bands mapped to different spatial locations in a mosaic pattern, requiring a demosaicing process to be applied to generate the desired hyperspectral image with full spatial and spectral resolution. However, many existing demosaicing algorithms suffer common artifacts such as periodic striping or other forms of noise. To ameliorate these issues, a hyperspectral demosaicing framework that couples a preliminary demosaicing network with a separate multi-stage progressive denoising network is proposed, with both networks employing transformer and attention mechanisms. A multi-term loss function permits supervised network training to monitor not only performance of the preliminary demosaicing but also denoising at each stage. An extensive collection of experimental results demonstrate that the proposed approach produces demosaiced images with not only fewer visual artifacts but also improved performance with respect to several quantitative measures as compared to other state-of-the-art demosaicing methods from recent literature.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1-10"},"PeriodicalIF":4.2,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}