IEEE Transactions on Computational Imaging: Latest Articles

SLBL-PU: Shadow-Based Layer-By-Layer Phase Unwrapping for Efficient 3D Measurement
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-24 DOI: 10.1109/TCI.2025.3544084
Ruiming Yu;Hongshan Yu;Haiqiang Xu;Wei Sun;Naveed Akhtar;Yaonan Wang
Abstract: Phase-shifting (PS) based structured light technology shows excellent 3D perception performance. However, it requires projecting an extensive array of patterns, imposing constraints on the measurement space, or embedding additional signals for phase unwrapping (PU), leading to motion artifacts and low robustness. To overcome these challenges, we propose a shadow-based layer-by-layer phase unwrapping (SLBL-PU) method, which enables absolute phase recovery for deep objects without the need for any supplementary patterns. In the first stage, a novel truncation feature within the local phase is exploited, allowing iterative PU to derive the modulated phase. In the second stage, inspired by shading theory, the absolute phase is restored from the geometric relationship between the imaging system and the object shadows. Additionally, by incorporating a time-division multiplexing strategy, the efficiency of 3D reconstruction in dynamic scenes is further tripled. In experiments involving different depths, phase modulation, complex colored scenes, and dynamic scenes, the proposed method demonstrated superior performance. Specifically, in static environments (0 mm/s), the proposed approach yields greater measurement accuracy (0.020 mm and 0.195 mm) than the traditional spatial-domain modulation (PS) method. In dynamic environments (15 mm/s), the proposed approach theoretically requires at least three patterns, with a defect rate lower than that of the nine-pattern, three-frequency PS method (8.58% vs. 14.68%).
(Vol. 11, pp. 452-467. Citations: 0.)
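As background for the PU problem this paper addresses: in N-step phase-shifting profilometry, a wrapped phase is first recovered from N fringe images via an arctangent, and it is this wrapped phase that the unwrapping stage (here, SLBL-PU) must resolve. A minimal NumPy sketch of the standard wrapped-phase computation follows; it is not the authors' code, and the fringe model I_n = A + B*cos(phi + 2*pi*n/N) is an assumption of the sketch.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N equally shifted fringe images.

    Assumes the standard fringe model I_n = A + B*cos(phi + 2*pi*n/N).
    """
    N = len(images)
    s = sum(I * np.sin(2 * np.pi * n / N) for n, I in enumerate(images))
    c = sum(I * np.cos(2 * np.pi * n / N) for n, I in enumerate(images))
    return np.arctan2(-s, c)  # wrapped to (-pi, pi]

# Synthetic check: recover a known phase map from 3 shifted patterns.
phi = np.linspace(-3, 3, 100)  # true phase, within one period
imgs = [0.5 + 0.4 * np.cos(phi + 2 * np.pi * n / 3) for n in range(3)]
phi_hat = wrapped_phase(imgs)
```

Values of phi outside (-pi, pi] would come back wrapped, which is exactly the ambiguity that phase unwrapping removes.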
Enhanced Single Pixel Imaging by Using Adaptive Jointly Optimized Conditional Diffusion
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-20 DOI: 10.1109/TCI.2025.3544087
Jiawei Dong;Hong Zeng;Sen Dong;Weining Chen;Qianxi Li;Jianzhong Cao;Qiurong Yan;Hao Wang
Abstract: Single-pixel imaging can reconstruct the original image at a low measurement rate (MR), and the target can be measured and reconstructed in low-light environments by capturing light-intensity information with a single-photon detector. Optimizing reconstruction quality at low MR has become a focal point of research aimed at enhancing measurement efficiency. The application of neural networks has significantly improved reconstruction quality, but performance still requires further enhancement. In this paper, a Diffusion Single Pixel Imaging Model (DSPIM) is proposed. A conditional diffusion model is used in the training and reconstruction processes of single-pixel imaging and is jointly optimized with an autoencoder network. This approach simulates the measurement and preliminary reconstruction of images, which are incorporated into the diffusion process as conditions. Noise and features are learned through a designed loss function consisting of a predicted-noise loss and a measurement-accuracy loss, allowing reconstruction to perform well at very low MR. In addition, an adaptive regularization coefficient adjustment method (ARCA) is designed for more effective optimization. Finally, the learned weights are loaded into a single-photon counting system as the measurement matrix, demonstrating that the blurriness caused by insufficient features at low MR is effectively addressed, resulting in clearer targets and well-distinguished features.
(Vol. 11, pp. 289-304. Citations: 0.)
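For readers unfamiliar with the single-pixel forward model underlying DSPIM: the scene is measured as inner products with projected patterns, y = A x, and the row count of A relative to the pixel count is the measurement rate. A toy NumPy sketch with Hadamard patterns and a naive least-squares reconstruction is shown below; the pattern choice and sizes are illustrative assumptions, and the paper's contribution is replacing this naive reconstruction step with a jointly optimized conditional diffusion model.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

x = np.zeros((4, 4))
x[1:3, 1:3] = 1.0                     # tiny target scene
n = x.size
A = hadamard(n)[: n // 2]             # keep half the patterns: MR = 50%
y = A @ x.ravel()                     # simulated single-pixel intensities
x_hat = np.linalg.lstsq(A, y, rcond=None)[0].reshape(4, 4)  # naive recon
```

At 50% MR the system is underdetermined; `lstsq` returns the minimum-norm solution, which is consistent with the measurements but blurry, the very failure mode the learned prior targets.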
RLNet: Reshaping Learning Network for Accurate Low-Light Light Field Image Enhancement
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-20 DOI: 10.1109/TCI.2025.3544070
Xingzheng Wang;Jiahao Lin;Kaiqiang Chen;Wenhao Huang;Zixuan Wang;Yuanlong Deng
Abstract: Existing low-light light field (LF) image enhancement methods have difficulty recovering image details due to underutilization of the 4D LF structure. We therefore propose a novel reshaping learning network, RLNet, to better exploit the 4D LF structure. The main idea is divide and conquer: combine complementary traits from multiple 2D representations reshaped from the LF data. Specifically, the network first learns local and global information from neighboring views and from all views, respectively. Then, based on the extracted information, reshaping learning modules (RLMs) fully exploit the 4D LF structure by reshaping LF images into different 2D representations and simultaneously mining their corresponding traits. In addition, the network integrates contrast attention modules (CAMs) to enhance image details. To validate our idea, we comprehensively evaluate RLNet on three publicly available datasets with different illumination. Experimental results demonstrate the superior performance of our method, which achieves an average PSNR gain of 0.45 dB over current state-of-the-art methods.
(Vol. 11, pp. 421-434. Citations: 0.)
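The "reshaping" idea starts from the fact that a 4D LF L(u, v, h, w) admits several 2D layouts, each exposing different structure. A small NumPy illustration of two common layouts, the sub-aperture-image mosaic and the macro-pixel image, follows; the array sizes are arbitrary and this is not the authors' module.

```python
import numpy as np

U, V, H, W = 3, 3, 8, 8
lf = np.arange(U * V * H * W, dtype=float).reshape(U, V, H, W)  # L(u, v, h, w)

# Sub-aperture-image (SAI) mosaic: whole views tiled side by side.
sai = lf.transpose(0, 2, 1, 3).reshape(U * H, V * W)

# Macro-pixel image: all angular samples interleaved at each spatial pixel.
mpi = lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)
```

The SAI layout keeps spatial neighborhoods contiguous (good for local texture), while the macro-pixel layout keeps angular neighborhoods contiguous (good for disparity cues), which is why combining representations can be complementary.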
Underwater Image Enhancement and Attenuation Restoration Based on Depth and Backscatter Estimation
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-20 DOI: 10.1109/TCI.2025.3544065
Yi-Zeng Hsieh;Ming-Ching Chang
Abstract: Underwater image analysis technology is important for studying in-water imagery in oceanography. Due to poor lighting conditions and severe scattering and attenuation of light, underwater image quality is heavily degraded in such environments; underwater image enhancement has therefore always been an essential step in the analysis pipeline. We develop an Underwater Image Enhancement and Attenuation Restoration (UIEAR) algorithm that operates on an RGB image input based on 3D depth and backscatter estimation. The proposed method achieves superior performance with light computational requirements, making it easy to deploy on edge devices. We provide the following contributions: (1) Our image enhancement is based on depth estimation using a new smooth operator on RGB pixels, which provides 3D spatial information for improved backscatter estimation and attenuation restoration. (2) We develop an improved imaging model that considers parameters relative to the camera and the local light source to estimate attenuation and backscatter effects; our light-source estimate is constructed from a local neighborhood of pixels to avoid distorting the backscatter and attenuation estimates. (3) We adopt white-balance adjustment to enhance underwater pixels and better match real-world colors. Our method improves general underwater image analysis, including object detection and segmentation. Experimental results demonstrate the effectiveness of our algorithm in restoring and enhancing underwater images.
(Vol. 11, pp. 321-332. Citations: 0.)
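Restoration methods of this kind rest on the standard underwater image-formation model I_c = J_c*exp(-beta_c*z) + B_c*(1 - exp(-beta_c*z)), where J is the unattenuated scene, z the depth, beta the per-channel attenuation coefficient, and B the backscatter (veiling) light. Given depth and backscatter estimates, the model inverts in closed form. A hedged NumPy sketch of that inversion follows; the coefficients are made-up constants, and the paper's actual contribution is estimating z, beta, and B, which this toy takes as given.

```python
import numpy as np

def degrade(J, z, beta, B):
    """Forward model: attenuated scene plus depth-dependent backscatter."""
    t = np.exp(-beta * z[..., None])   # per-channel transmission map
    return J * t + B * (1.0 - t)

def restore(I, z, beta, B):
    """Invert the model given depth and backscatter estimates."""
    t = np.exp(-beta * z[..., None])
    return (I - B * (1.0 - t)) / np.maximum(t, 1e-6)

rng = np.random.default_rng(0)
J = rng.random((16, 16, 3))            # clean scene
z = 2.0 + rng.random((16, 16))         # depth map in meters, assumed known
beta = np.array([0.8, 0.4, 0.2])       # red light attenuates fastest
B = np.array([0.1, 0.3, 0.4])          # bluish veiling light
J_hat = restore(degrade(J, z, beta, B), z, beta, B)
```

The clamp on the transmission map guards against division blow-up at large depths, where real methods instead rely on priors rather than raw inversion.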
Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-19 DOI: 10.1109/TCI.2025.3540706
Shijun Liang;Evan Bell;Qing Qu;Rongrong Wang;Saiprasad Ravishankar
Abstract: The ability of the deep image prior (DIP) to recover high-quality images from incomplete or corrupted measurements has made it popular for inverse problems in image restoration and medical imaging, including magnetic resonance imaging (MRI). However, conventional DIP suffers from severe overfitting and spectral bias effects. In this work, we first analyze how DIP recovers information from undersampled imaging measurements by studying the training dynamics of the underlying networks in the kernel regime for different architectures. This study sheds light on important properties underlying DIP-based recovery. Current research suggests that incorporating a reference image as the network input can enhance DIP's reconstruction performance compared to random inputs; however, obtaining suitable reference images requires supervision and raises practical difficulties. To overcome this obstacle, we introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input while eliminating the need for training data. Our method incorporates a novel denoiser regularization term that enables robust and stable joint estimation of both the network input and the reconstructed image. We demonstrate that our self-guided method surpasses both the original DIP and modern supervised methods in MR image reconstruction performance, and outperforms previous DIP-based schemes for image inpainting.
(Vol. 11, pp. 435-451. Citations: 0.)
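To make the "jointly optimize weights and input" idea concrete: plain DIP fits a network f(z; theta) to the observed measurements with the input z held fixed, while the self-guided variant also descends on z. A deliberately tiny NumPy analogue on 1D inpainting is sketched below (a two-layer net with hand-written gradients); the paper's method additionally uses a denoiser regularization term, omitted here, and operates on real MRI data with deep CNNs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.sin(2 * np.pi * 3 * np.linspace(0, 1, n))
mask = rng.random(n) < 0.5             # observed entries (inpainting mask)
y = x_true * mask                      # incomplete measurements

z = rng.standard_normal(16)            # network input (also optimized)
W1 = 0.1 * rng.standard_normal((128, 16))
W2 = 0.1 * rng.standard_normal((n, 128))

lr = 1e-2
loss0 = None
for _ in range(2000):
    h = np.tanh(W1 @ z)
    r = mask * (W2 @ h - y)            # residual on observed entries only
    loss = 0.5 * float(r @ r)
    if loss0 is None:
        loss0 = loss
    g_h = W2.T @ r                     # backprop through the linear head
    g_pre = g_h * (1.0 - h * h)        # backprop through tanh
    g_W1 = np.outer(g_pre, z)
    g_z = W1.T @ g_pre                 # "self-guided": gradient w.r.t. input
    W2 -= lr * np.outer(r, h)
    W1 -= lr * g_W1
    z -= lr * g_z
```

The toy only demonstrates the joint update rule; without the paper's regularization and early stopping, such a fit will eventually overfit the observed entries, which is the failure mode the analysis portion of the paper studies.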
Full Matrix Wavefield Migration for Layered Photoacoustic Imaging
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-17 DOI: 10.1109/TCI.2025.3530256
Kang Qin;Meng Cao;Peng Ren;Fengchen Luo;Siyu Liu
Abstract: Medium heterogeneity poses a severe challenge to image reconstruction in transcranial photoacoustic tomography and cannot be fully addressed by the homogeneous phase shift migration (PSM) method. Although existing methods can enhance imaging quality to a certain extent, they are limited by large approximation errors and low computational efficiency. To further improve imaging performance and calculation speed, this paper proposes full matrix wavefield migration, which accounts for both lateral and longitudinal variations of the speed of sound (SOS). Unlike PSM, which relies on a layer-by-layer migration framework, the proposed approach reformulates the SOS map across the propagation medium into a spatial matrix of SOS. By extrapolating wavefield data in the wavenumber domain and correcting phase deviations in the spatial domain, the method reduces image distortion caused by SOS irregularity and suppresses artifacts in reconstructed images. Moreover, the calculation process is further optimized to eliminate redundancy. Simulation and experimental results demonstrate that the full matrix wavefield migration method improves lateral resolution (by up to 21.24%) and computational efficiency (by about 19.84%) compared to previous methods.
(Vol. 11, pp. 179-188. Citations: 0.)
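The wavenumber-domain extrapolation that both PSM and the proposed method build on propagates a monochromatic wavefield from depth z to z + dz by multiplying its spatial spectrum by exp(i*k_z*dz), with k_z = sqrt(omega^2/c^2 - k_x^2). A one-step NumPy sketch for a homogeneous layer follows; grid sizes and constants are illustrative, and the paper's contribution, the spatially varying SOS matrix, is not reproduced here.

```python
import numpy as np

def phase_shift_step(p, freq, c, dx, dz):
    """Extrapolate one temporal-frequency wavefield slice down by dz."""
    kx = 2 * np.pi * np.fft.fftfreq(p.size, d=dx)
    k = 2 * np.pi * freq / c
    kz2 = k**2 - kx**2
    prop = kz2 > 0                          # keep propagating components only
    kz = np.sqrt(np.where(prop, kz2, 0.0))  # evanescent part is discarded
    spec = np.fft.fft(p)
    return np.fft.ifft(spec * np.exp(1j * kz * dz) * prop)

rng = np.random.default_rng(0)
p0 = rng.standard_normal(256)               # wavefield at depth z, one frequency
p1 = phase_shift_step(p0, freq=2e6, c=1500.0, dx=1e-4, dz=1e-3)
```

Because the propagating components are only phase-rotated and the evanescent ones dropped, a single step can never increase wavefield energy, a useful sanity check on any migration implementation.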
Contrast-Aware Network With Aggregated-Interacted Transformer and Multi-Granularity Aligned Contrastive Learning for Synthesizing Contrast-Enhanced Abdomen CT Imaging
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-10 DOI: 10.1109/TCI.2025.3540711
Qikui Zhu;Andrew L. Wentland;Shuo Li
Abstract: Contrast-enhanced CT imaging (CECTI) is crucial for the diagnosis of patients with liver tumors. If CECTI could be synthesized using only non-contrast CT imaging (NCCTI), it would provide significant clinical advantages. We propose a novel contrast-aware network with an Aggregated-Interacted Transformer and Multi-granularity aligned contrastive learning (AMNet) for CECTI synthesis, which enables synthesizing CECTI for the first time. AMNet mitigates the high-risk, time-consuming, expensive, and radiation-intensive procedures required for obtaining CECTI, and it addresses the low contrast and low sensitivity of CT imaging through four key innovations: 1) The Aggregated-Interacted Transformer (AI-Transformer) introduces two mechanisms, multi-scale token aggregation and cross-token interaction, which enable long-range dependencies between multi-scale cross-tokens and facilitate the extraction of discriminative structural and content features of tissues, thereby addressing the low-contrast challenge. 2) Multi-granularity Aligned Contrastive Learning (MACL) constructs a new regularization term exploiting intra-domain compact and inter-domain separable features to improve the model's sensitivity to chemical contrast agents (CAs), overcoming the low-sensitivity challenge. 3) The Contrast-Aware Adaptive Layer (CAL) gives AMNet contrast-aware abilities that adaptively adjust the contrast information of various regions for high-fidelity synthesis. 4) The dual-stream discriminator (DSD) adopts an ensemble strategy to evaluate the synthetic CECTI from multiple perspectives. AMNet is validated using two corresponding CT imaging modalities (pre-contrast and portal venous-phase), an essential procedure for liver tumor biopsy. Experimental results demonstrate that AMNet successfully synthesizes CECTI without chemical CA injection for the first time.
(Vol. 11, pp. 277-288. Citations: 0.)
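The MACL component builds on contrastive learning. As a generic illustration of the kind of alignment objective involved (this is the standard InfoNCE loss, not the paper's multi-granularity term), matched feature pairs are pulled together while mismatched pairs in the batch are pushed apart:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE: row i of `positives` is the match for row i of `anchors`."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))     # cross-entropy on the diagonal

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 32))
aligned = info_nce(feats, feats)        # perfectly aligned pairs: low loss
shuffled = info_nce(feats, feats[::-1]) # mismatched pairs: high loss
```

Minimizing such a loss makes same-source features compact and different-source features separable, which is the property MACL exploits to sharpen sensitivity to contrast agents.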
Static CT With Sources and Detectors Distributed in a Multi-Segment Manner: System Analysis and Analytical Reconstruction
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-10 DOI: 10.1109/TCI.2025.3540707
Changyu Chen;Yuxiang Xing;Li Zhang;Zhiqiang Chen
Abstract: In this work, we investigate the projection sampling characteristics and analytical reconstruction algorithms of a static CT with sources and detectors distributed in a multi-segment manner (MS-StaticCT). MS-StaticCT is a generalized configuration of previous static linear CT systems, offering enhanced design flexibility and utilization efficiency in both the X-ray source and detector components. By analyzing the imaging geometry of single-segment source-detector pairs, we examine the Radon-space properties of MS-StaticCT and propose a data sufficiency condition for system design. To explore the impact of the unique sampling characteristics of MS-StaticCT on reconstruction quality, we derive analytical algorithms under two popular pipelines, filtered backprojection (MS-FBP) and differentiated backprojection filtration (MS-DBF), and assess their performance. Due to non-uniform sampling and singular points between segments, the global filtration in MS-FBP requires local rebinning, whereas the local nature of differentiation enables filtration without rebinning. In addition, to address insufficient data caused by optical obstruction from sources and detectors, we incorporate multiple imaging planes and design a generalized weighting function that efficiently utilizes conjugate projections. Simulation studies on numerical phantoms and clinical CT data demonstrate the feasibility of MS-StaticCT and the proposed reconstruction algorithms. The results highlight the superiority of MS-DBF in accuracy and spatial resolution for multi-segment geometries without compromising noise performance, compared to MS-FBP, whose performance depends on the number of detector segments involved for each focal spot. Our study provides a comprehensive understanding of the essential data structure and basic reconstruction tailored to systems characterized by linear source trajectories and detectors.
(Vol. 11, pp. 251-264. Citations: 0.)
GroupCDL: Interpretable Denoising and Compressed Sensing MRI via Learned Group-Sparsity and Circulant Attention
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-02-05 DOI: 10.1109/TCI.2025.3539021
Nikola Janjušević;Amirhossein Khalilian-Gourtani;Adeen Flinker;Li Feng;Yao Wang
Abstract: Nonlocal self-similarity within images has become an increasingly popular prior in deep-learning models. Despite their successful image restoration performance, such models remain largely uninterpretable due to their black-box construction. Our previous studies have shown that an interpretable construction of a fully convolutional denoiser (CDLNet), with performance on par with state-of-the-art black-box counterparts, is achievable by unrolling a convolutional dictionary learning algorithm. In this manuscript, we seek an interpretable construction of a convolutional network with a nonlocal self-similarity prior that performs on par with black-box nonlocal models. We show that such an architecture can be achieved by upgrading the $\ell_1$ sparsity prior (soft-thresholding) of CDLNet to an image-adaptive group-sparsity prior (group-thresholding). The proposed learned group-thresholding uses nonlocal attention to perform spatially varying soft-thresholding on the latent representation. To enable effective training and inference on large images with global artifacts, we propose a novel circulant-sparse attention. We achieve competitive natural-image denoising performance compared to black-box nonlocal DNNs and transformers. The interpretable construction of our network allows a straightforward extension to compressed sensing MRI (CS-MRI), yielding state-of-the-art performance. Lastly, we show robustness to noise-level mismatches between training and inference for both denoising and CS-MRI reconstruction.
(Vol. 11, pp. 201-212. Citations: 0.)
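The upgrade from CDLNet's $\ell_1$ prior to GroupCDL's group-sparsity prior corresponds to swapping proximal operators: elementwise soft-thresholding versus group-thresholding, which scales each group by max(1 - tau/||x_g||_2, 0) so that weak groups vanish as a whole. A minimal NumPy sketch of the two operators follows; the paper's learned, attention-driven spatially varying thresholds are not modeled here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise l1 prox: shrink each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def group_threshold(X, tau, axis=0):
    """Group (l2,1) prox: shrink each group by its l2 norm; weak groups vanish."""
    norms = np.linalg.norm(X, axis=axis, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

X = np.array([[3.0, 0.1],
              [4.0, 0.1]])   # two column groups with norms 5.0 and ~0.14
Y = group_threshold(X, tau=1.0, axis=0)
```

Note the qualitative difference: soft-thresholding can zero one entry of a group while keeping its sibling, whereas group-thresholding keeps or kills coefficients jointly, which is what encodes "these channels co-activate" structure.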
Cascaded Frequency-Encoded Multi-Scale Neural Fields for Sparse-View CT Reconstruction
IF 4.2, CAS Region 2, Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2025-01-31 DOI: 10.1109/TCI.2025.3536078
Jia Wu;Jinzhao Lin;Yu Pang;Xiaoming Jiang;Xinwei Li;Hongying Meng;Yamei Luo;Lu Yang;Zhangyong Li
Abstract: Sparse-view computed tomography aims to reduce radiation exposure but often suffers from degraded image quality due to insufficient projection data. Traditional methods struggle to balance data fidelity and detail preservation, particularly in high-frequency regions. In this paper, we propose a Cascaded Frequency-Encoded Multi-Scale Neural Fields (Ca-FMNF) framework that reformulates the reconstruction task as refining high-frequency residuals upon a high-quality low-frequency foundation. It integrates a pre-trained iterative unfolding network for the initial low-frequency estimate with an FMNF representing the high-frequency residuals. The FMNF parameters are optimized by minimizing the discrepancy between the measured projections and those estimated through the imaging forward model, thereby refining the residuals based on the initial estimate. This dual-stage strategy enhances data consistency and preserves fine structures. Extensive experiments on simulated and clinical datasets demonstrate that our method achieves the best results in both quantitative metrics and visual quality, effectively reducing artifacts and preserving structural details.
(Vol. 11, pp. 237-250. Citations: 0.)
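The "frequency-encoded" part of Ca-FMNF refers to feeding the neural field multi-scale sinusoidal embeddings of the input coordinates, which is what lets a coordinate MLP represent high-frequency residuals at all. A generic NumPy sketch of such positional encoding follows; the band count and frequency scaling are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def frequency_encode(coords, num_bands=6):
    """Map (..., d) coordinates to multi-scale sin/cos features."""
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # octave-spaced frequencies
    ang = coords[..., None] * freqs                 # (..., d, num_bands)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)      # (..., d * 2 * num_bands)

xy = np.random.default_rng(0).random((5, 2))        # 5 points in [0, 1]^2
feat = frequency_encode(xy)
```

Without such an encoding, coordinate MLPs exhibit a strong spectral bias toward smooth functions; the octave-spaced bands give the field explicit handles on each frequency scale.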