Latest Publications in IEEE Transactions on Computational Imaging

HDD-Net: Haar Dual Domain Network for Ring Artifacts Correction
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-04-01 DOI: 10.1109/TCI.2025.3551166
Xuelong Wu;Junsheng Wang;Qingjie Zhao
{"title":"HDD-Net: Haar Dual Domain Network for Ring Artifacts Correction","authors":"Xuelong Wu;Junsheng Wang;Qingjie Zhao","doi":"10.1109/TCI.2025.3551166","DOIUrl":"https://doi.org/10.1109/TCI.2025.3551166","url":null,"abstract":"Ring artifacts are common artifacts in X-ray Computed Tomography (XCT) scans and have a significant impact on subsequent feature/phase extractions due to the small grayscale gradients in XCT volume data of bulk materials. This paper proposes the Haar Dual Domain Network for correcting ring artifacts. By utilizing the Haar wavelet decomposition on images containing ring artifacts in both the image and projection domains, the ring artifacts are preliminarily separated, facilitating their removal by neural networks while preserving microstructure features such as low-contrast phase boundaries. By constructing a feature fusion network, the information from both 2D slices and 3D projection volume data has been fully integrated to eliminate ring artifacts while preserving the edges of every feature. The effectiveness of the Haar wavelet transform and fusion network has been validated by ablation experiments, proving the application of HDD-Net to large volume of XCT data.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"399-409"},"PeriodicalIF":4.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
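
A minimal sketch of the separation idea behind HDD-Net, assuming ring artifacts appear as stripes (here, faulty alternating detector columns) in the projection domain; the Haar decomposition concentrates the stripe energy in one detail sub-band, which this toy picks by energy and zeros. The learned network and dual-domain fusion are not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
base = rng.normal(1.0, 0.02, (256, 256))        # smooth-ish projection data
stripes = 0.3 * (np.arange(256) % 2)            # faulty alternating detector columns
sino = base + stripes[None, :]

cA, (cH, cV, cD) = pywt.dwt2(sino, 'haar')
# Stripes constant along one axis land in a single detail sub-band;
# zero whichever detail band carries more energy here.
bands = {'H': cH, 'V': cV}
stripe_band = max(bands, key=lambda k: (bands[k] ** 2).sum())
bands[stripe_band][:] = 0
clean = pywt.idwt2((cA, (cH, cV, cD)), 'haar')

# Column-mean spread drops once the stripe sub-band is suppressed.
print(np.ptp(sino.mean(axis=0)), np.ptp(clean.mean(axis=0)))
```

Zeroing a whole sub-band also discards genuine detail of that orientation, which is exactly why the paper learns the separation instead of hard-thresholding it.
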
PACformer: A Multi-Stage Heterogeneous Convolutional-Vision Transformer for Sparse-View Photoacoustic Tomography Restoration
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-31 DOI: 10.1109/TCI.2025.3550716
Li He;Ruitao Chen;Xiangyu Liu;Xu Cao;Shouping Zhu;Yihan Wang
{"title":"PACformer: A Multi-Stage Heterogeneous Convolutional-Vision Transformer for Sparse-View Photoacoustic Tomography Restoration","authors":"Li He;Ruitao Chen;Xiangyu Liu;Xu Cao;Shouping Zhu;Yihan Wang","doi":"10.1109/TCI.2025.3550716","DOIUrl":"https://doi.org/10.1109/TCI.2025.3550716","url":null,"abstract":"Sparse sampling of photoacoustic (PA) signals is a crucial strategy for enhancing the feasibility of photoacoustic tomography (PAT) in clinical settings by reducing system complexity and costs. However, this approach often faces significant artifacts resulting from traditional reconstruction algorithms, underscoring the urgent need for effective solutions. To address the critical challenge of balancing computational efficiency with imaging quality, we introduce PACformer—a novel hybrid model that integrates convolutional neural networks (CNNs) with multi-head self-attentions (MSAs) to improve the reconstruction of sparse-view PAT images. While conventional CNNs excel at local feature extraction, they often struggle to capture long-range dependencies inherent in continuous structures and the diverse artifact patterns present in PAT images. PACformer tackles these limitations through a dual architecture that seamlessly combines MSAs with heterogeneous convolutional layers. Since feature representations differ in size and semantics at various stages of the deep model, PACformer employs specialized blocks for shallow and deep stages. Specifically, it utilizes efficient local convolutions and windowed MSAs for high-resolution feature maps, conditional convolutions (CondConv) integrated with MSAs for advanced feature representation in deeper stages, and Scale-Modulated Convolution combined with CondConv for the bottleneck stage. Experimental results on open-source datasets demonstrate PACformer's superior performance compared to traditional and state-of-the-art networks, validated through ablation studies and attention map visualizations. By effectively modeling both local and global artifacts, PACformer establishes itself as a robust solution for sparse-view PAT reconstruction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"377-388"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
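
To give a flavor of the hybrid design, here is a generic convolution-plus-MSA block in PyTorch. The channel counts, the use of plain nn.MultiheadAttention (rather than windowed MSA or CondConv), and the additive fusion are illustrative assumptions, not PACformer's actual blocks.

```python
import torch
import torch.nn as nn

class ConvMSABlock(nn.Module):
    """Toy hybrid block: local conv branch + global self-attention branch."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(                 # local feature extraction
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C) token sequence
        t = self.norm(tokens)
        glob, _ = self.attn(t, t, t)                # long-range dependencies
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return x + local + glob                     # residual fusion

x = torch.randn(1, 32, 64, 64)
print(ConvMSABlock(32)(x).shape)                    # torch.Size([1, 32, 64, 64])
```
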
Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-31 DOI: 10.1109/TCI.2025.3555134
Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen
{"title":"Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training","authors":"Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen","doi":"10.1109/TCI.2025.3555134","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555134","url":null,"abstract":"High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions like psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which only utilizes magnitude data from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond system limits. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, achieving 4.28 μm from 7.19 μm, along with PSNR gains of up to 3.37 dB and SSIM increases by 0.157, significantly enhancing boundary continuity and fine detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and presents potential for iterative application to achieve even greater improvements. Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"497-505"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
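
The physics this framework exploits can be shown in a few lines: axial resolution is set by usable spectral bandwidth, so truncated-spectrum and full-spectrum depth profiles computed from the same fringe give a self-supervised training pair. A toy single-reflector example in NumPy; how the paper actually builds its pairs is not reproduced, and all values are illustrative.

```python
import numpy as np

n = 2048
k = np.arange(n)
depth = 300
fringe = np.cos(2 * np.pi * depth * k / n)             # fringe of one reflector
fringe *= np.exp(-0.5 * ((k - n / 2) / (n / 4)) ** 2)  # source spectrum envelope

target = np.abs(np.fft.fft(fringe))[: n // 2]          # full-bandwidth A-scan

mask = np.abs(k - n / 2) < n // 8                      # keep only the central band
lowres = np.abs(np.fft.fft(fringe * mask))[: n // 2]   # broader axial PSF

# The reflector peak is wider in `lowres`; a network trained to map
# truncated-spectrum A-scans back to full-spectrum ones learns axial SR.
for name, a in [("full", target), ("truncated", lowres)]:
    width = (a > a.max() / 2).sum()                    # crude FWHM in pixels
    print(name, "peak@", a.argmax(), "FWHM≈", width, "px")
```
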
Batch-FPM: Random Batch-Update Multi-Parameter Physical Fourier Ptychography Neural Network
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-29 DOI: 10.1109/TCI.2025.3574887
Ruiqing Sun;Delong Yang;Yiyan Su;Qun Hao;Shaohui Zhang
{"title":"Batch-FPM: Random Batch-Update Multi-Parameter Physical Fourier Ptychography Neural Network","authors":"Ruiqing Sun;Delong Yang;Yiyan Su;Qun Hao;Shaohui Zhang","doi":"10.1109/TCI.2025.3574887","DOIUrl":"https://doi.org/10.1109/TCI.2025.3574887","url":null,"abstract":"Fourier Ptychographic Microscopy (FPM) is a computational imaging technique that enables high-resolution imaging over a large field of view. However, its application in the biomedical field has been limited due to the long image reconstruction time and poor noise robustness. In this paper, we propose a fast and robust FPM reconstruction method based on physical neural networks with batch updated optimization strategies, capable of achieving attractive results with low single-to-noise ratio and correcting multiple system parameters simultaneously. Our method leverages a random batch optimization approach, breaks away from the fixed sequential iterative order and gives greater attention to high-frequency information. The proposed method has better convergence performance even for low signal-to-noise ratio data sets, such as low exposure time dark-field images with an exposure time equal to one percent of the normal. As a result, it can greatly increase the image recording and result reconstruction speed without any additional hardware modifications. By utilizing advanced deep learning optimizers and perform parallel computational scheme, our method enhances GPU computational efficiency, significantly reducing reconstruction costs. Experimental results demonstrate that our method achieves near real-time digital refocusing of a 1024 × 1024 pixels region of interest on consumer-grade GPUs. This approach significantly improves temporal resolution (by reducing the exposure time of dark-field images), noise resistance, and reconstruction speed, and therefore can efficiently promote the practical application of FPM in clinical diagnostics, digital pathology, and biomedical research, etc. In addition, we believe our algorithm scheme can help researchers quickly validate and implement FPM-related ideas.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"864-871"},"PeriodicalIF":4.2,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144557937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
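
A stripped-down sketch of the random-batch idea: instead of a fixed sequential sweep over LEDs, each optimizer step fits a freshly drawn random subset of the low-resolution measurements. The forward model below is a bare FPM toy (a hard spectrum crop standing in for an ideal pupil, no aberration terms), and the sizes and LED-to-shift mapping are assumptions.

```python
import torch

n_hr, n_lr, n_led, batch = 128, 32, 25, 5
# assumed 5x5 LED grid -> sub-spectrum centres shifted +/-16 px around DC
shifts = [(64 + 8 * (i // 5 - 2), 64 + 8 * (i % 5 - 2)) for i in range(n_led)]

def crop(spec, cy, cx):                        # pupil support = hard crop
    h = n_lr // 2
    return spec[cy - h:cy + h, cx - h:cx + h]

obj = torch.rand(n_hr, n_hr)                   # unknown high-res object
truth = torch.fft.fftshift(torch.fft.fft2(obj))
meas = [torch.fft.ifft2(crop(truth, *s)).abs()  # toy amplitudes per LED
        for s in shifts]                        # (real FPM records intensity)

re = (0.01 * torch.randn(n_hr, n_hr)).requires_grad_()
im = (0.01 * torch.randn(n_hr, n_hr)).requires_grad_()
opt = torch.optim.Adam([re, im], lr=0.05)

for epoch in range(30):
    for idx in torch.randperm(n_led).split(batch):  # fresh random batches
        spec = torch.complex(re, im)                # current spectrum estimate
        loss = sum(((torch.fft.ifft2(crop(spec, *shifts[i])).abs()
                     - meas[i]) ** 2).mean() for i in idx.tolist())
        opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```
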
Quick Unsupervised Hyperspectral Dimensionality Reduction for Earth Observation: A Comparison
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-28 DOI: 10.1109/TCI.2025.3555137
Daniela Lupu;Joseph L. Garrett;Tor Arne Johansen;Milica Orlandic;Ion Necoara
{"title":"Quick Unsupervised Hyperspectral Dimensionality Reduction for Earth Observation: A Comparison","authors":"Daniela Lupu;Joseph L. Garrett;Tor Arne Johansen;Milica Orlandic;Ion Necoara","doi":"10.1109/TCI.2025.3555137","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555137","url":null,"abstract":"Dimensionality reduction can be applied to hyperspectral images so that the most useful data can be extracted and processed more quickly. This is critical in any situation in which data volume exceeds the capacity of the computational resources, particularly in the case of remote sensing platforms (e.g., drones, satellites), but also in the case of multi-year datasets. Moreover, the computational strategies of unsupervised dimensionality reduction often provide the basis for more complicated supervised techniques. In this work, eight unsupervised dimensionality reduction algorithms are tested on hyperspectral data from the HYPSO-1 earth observation satellite. Each particular algorithm is chosen to be representative of a broader collection of methods. Our extensive experiments probe the computational complexity, reconstruction accuracy, signal clarity, sensitivity to artifacts, and effects on target detection and classification of the different algorithms. No algorithm consistently outperformed the others across all tests, but some general trends regarding the characteristics of the algorithms did emerge. With half a million pixels, computational time requirements of the methods varied by 5 orders of magnitude, and the reconstruction error varied by about 3 orders of magnitude. A relationship between mutual information and artifact susceptibility was suggested by the tests. The relative performance of the algorithms differed significantly between the target detection and classification tests. Overall, these experiments both show the power of dimensionality reduction and give guidance regarding how to evaluate a technique prior to incorporating it into a processing pipeline.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"520-535"},"PeriodicalIF":4.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
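
For a sense of how such a comparison is scored, here is one representative method (PCA) timed and evaluated by reconstruction error on a synthetic cube; the HYPSO-1 data and the other seven algorithms are of course not reproduced here.

```python
import time
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
cube = rng.random((100, 100, 120)).astype(np.float32)   # H x W x bands (toy)
X = cube.reshape(-1, cube.shape[-1])                    # pixels x bands

t0 = time.perf_counter()
pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)                                    # reduced representation
Xr = pca.inverse_transform(Z)                           # back to band space
elapsed = time.perf_counter() - t0

rmse = np.sqrt(np.mean((X - Xr) ** 2))
print(f"time {elapsed:.2f}s, reconstruction RMSE {rmse:.4f}")
```
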
Fast Adaptive Plug-and-Play ADMM Framework for Short-Range 3-D SAR Imaging
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-26 DOI: 10.1109/TCI.2025.3573587
The-Hien Pham;Ic-Pyo Hong
{"title":"Fast Adaptive Plug-and-Play ADMM Framework for Short-Range 3-D SAR Imaging","authors":"The-Hien Pham;Ic-Pyo Hong","doi":"10.1109/TCI.2025.3573587","DOIUrl":"https://doi.org/10.1109/TCI.2025.3573587","url":null,"abstract":"The advancement of short-range millimeter-wave (MMW) synthetic aperture radar (SAR) imaging has shown significant advancements in various fields, including security surveillance, non-destructive evaluation, and medical diagnostics. This paper presents a fast adaptive plug-and-play alternating direction method of multipliers (FA-PnP-ADMM) framework designed to improve the efficiency and accuracy of SAR image reconstruction. By addressing key challenges like image degradation caused by fast Fourier transform (FFT) operations and the computational burden of conventional ADMM methods, our framework significantly improves performance. Concretely, alongside a PnP strategy, the proposed FA-PnP-ADMM framework leverages the state-of-the-art single-frequency holographic (SFH) ADMM-based image-solving model and the adaptive parameter adjustment predicated on the relationship between relaxed ADMM and relaxed Douglas-Rachford splitting (DRS). This innovative integration significantly accelerates convergence and reduces computational overhead. Furthermore, the methodology incorporates proficient denoising deep learning (DL) architectures, encompassing convolutional neural network (CNN) and auto-encoder (AE), seamlessly embedded within the iterative process, resulting in a tailored PnP-DL-ADMM. This synergy not only enhances noise suppression and image fidelity but also adapts effectively to diverse scene complexities and noise levels. Unlike previous works that employ these techniques separately, our approach integrates adaptive optimization and DL-based denoisers into a unified framework optimized for short-range 3D SAR imaging. Experimental results demonstrate substantial improvements in both runtime and reconstruction quality, highlighting the practicality and impact of this methodology for real-world applications.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"764-778"},"PeriodicalIF":4.2,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
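
The PnP-ADMM skeleton, with a residual-balancing style of adaptive penalty update, looks roughly as follows. This is a generic sketch: the Gaussian filter stands in for the paper's CNN/AE denoisers, and the data term is plain denoising rather than the SFH SAR model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[20:44, 20:44] = 1.0
y = x_true + 0.2 * rng.standard_normal(x_true.shape)

rho, z, u = 1.0, y.copy(), np.zeros_like(y)
for it in range(30):
    x = (y + rho * (z - u)) / (1 + rho)        # data-fidelity prox (closed form)
    z_old = z
    z = gaussian_filter(x + u, sigma=1.0)      # plug-in denoiser
    u += x - z                                 # scaled dual update
    r = np.linalg.norm(x - z)                  # primal residual
    s = rho * np.linalg.norm(z - z_old)        # dual residual
    if r > 10 * s:                             # adaptive penalty: keep the
        rho *= 2; u /= 2                       # residuals within a factor of
    elif s > 10 * r:                           # ten of each other, rescaling
        rho /= 2; u *= 2                       # the scaled dual accordingly
print("RMSE:", np.sqrt(np.mean((z - x_true) ** 2)))
```
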
Swap-Net: A Memory-Efficient 2.5D Network for Sparse-View 3D Cone Beam CT Reconstruction to ICF Applications
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-23 DOI: 10.1109/TCI.2025.3572699
Xiaojian Xu;Marc L. Klasky;Michael T. McCann;Jason Hu;Jeffrey A. Fessler
{"title":"Swap-Net: A Memory-Efficient 2.5D Network for Sparse-View 3D Cone Beam CT Reconstruction to ICF Applications","authors":"Xiaojian Xu;Marc L. Klasky;Michael T. McCann;Jason Hu;Jeffrey A. Fessler","doi":"10.1109/TCI.2025.3572699","DOIUrl":"https://doi.org/10.1109/TCI.2025.3572699","url":null,"abstract":"Reconstructing 3D cone beam computed tomography (CBCT) images from a limited set of projections is an important inverse problem in many imaging applications from medicine to Inertial Confinement Fusion (ICF). The performance of traditional methods such as filtered back projection (FBP) and model-based regularization is sub-optimal when the number of available projections is limited. In the past decade, deep learning (DL) has gained great popularity for solving CT inverse problems. A typical DL-based method for CBCT image reconstruction is to learn an end-to-end mapping by training a 2D or 3D network. However, 2D networks fail to fully use global information. While 3D networks are desirable, they become impractical as image sizes increase because of the high memory cost. This paper proposes Swap-Net, a memory-efficient 2.5D network for sparse-view 3D CBCT image reconstruction. Swap-Net uses a sequence of novel axes-swapping operations to reconstruct 3D volumes in an end-to-end fashion without using full 3D convolutions. Simulation results on ICF show that Swap-Net consistently outperforms baseline methods both quantitatively and qualitatively in terms of reducing artifacts and preserving details of complex hydrodynamic simulations of relevance to the ICF community.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"872-887"},"PeriodicalIF":4.2,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144557768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
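
The axes-swapping trick can be sketched in a few lines: run 2D convolutions over slices, move a different axis into the slice position, and repeat, so information mixes across all three dimensions without any 3D convolutions. This shows only the core permutation idea, not the published architecture.

```python
import torch
import torch.nn as nn

class AxisSwap2p5D(nn.Module):
    """Toy 2.5D block: one 2D conv pass per volume axis, with axis swaps."""
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, 1, 3, padding=1) for _ in range(3)]
        )

    def forward(self, vol):                      # vol: (D, H, W)
        for axis, conv in enumerate(self.convs):
            v = vol.movedim(axis, 0)             # make `axis` the slice axis
            v = conv(v.unsqueeze(1)).squeeze(1)  # 2D conv on every slice
            vol = v.movedim(0, axis)             # swap the axis back
        return vol

vol = torch.randn(32, 32, 32)
print(AxisSwap2p5D()(vol).shape)                 # torch.Size([32, 32, 32])
```

Memory scales with one 2D feature map batch at a time, which is the point of the 2.5D design.
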
Resolution Enhancement of Under-Sampled Photoacoustic Microscopy Images Using Neural Representation
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-21 DOI: 10.1109/TCI.2025.3565129
Youshen Xiao;Zhengyuan Zhang;Ruixi Sun;Yiling Shi;Sheng Liao;Fan Zhang;Yunhui Jiang;Xiyu Chen;Arunima Sharma;Manojit Pramanik;Yuyao Zhang;Fei Gao
{"title":"Resolution Enhancement of Under-Sampled Photoacoustic Microscopy Images Using Neural Representation","authors":"Youshen Xiao;Zhengyuan Zhang;Ruixi Sun;Yiling Shi;Sheng Liao;Fan Zhang;Yunhui Jiang;Xiyu Chen;Arunima Sharma;Manojit Pramanik;Yuyao Zhang;Fei Gao","doi":"10.1109/TCI.2025.3565129","DOIUrl":"https://doi.org/10.1109/TCI.2025.3565129","url":null,"abstract":"Acoustic-Resolution Photoacoustic Microscopy (AR-PAM) has demonstrated great potential in subcutaneous vascular imaging. However, its spatial resolution is limited by the system's Point Spread Function (PSF). To enhance resolution, various deconvolution-based methods can be employed. Traditional deconvolution methods, such as Richardson-Lucy deconvolution and model-based deconvolution, typically use the PSF as prior knowledge to improve spatial resolution. However, accurately measuring the system's PSF is challenging, leading to the widespread adoption of low vision deconvolution methods, which often suffer from inaccurate deconvolution. Another major challenge of AR-PAM is the long scanning time. To accelerate image acquisition, downsampling can be applied to reduce scanning time. Subsequently, interpolation methods are commonly used to recover high-resolution images from the downsampled measurements. However, conventional interpolation methods struggle to achieve high-fidelity image recovery, particularly under high downsampling conditions. In this study, we propose a method based on Implicit Neural Representations (INR) to simultaneously address the challenges of unknown PSF and under-sampled image recovery. By leveraging INR, we learn a continuous mapping from spatial positions to initial acoustic pressure, effectively compensating for the discretization of the image space and enhancing the resolution of AR-PAM. Specifically, we treat the PSF as a learnable parameter to mitigate inaccuracies in PSF measurement. We qualitatively and quantitatively evaluated the proposed method on leaf vein data, mouse brain data, and real in vivo AR-PAM data, demonstrating superior performance compared to existing methods in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"678-688"},"PeriodicalIF":4.2,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144124030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
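
A condensed sketch of the two ingredients named in the abstract: an MLP (the implicit neural representation) mapping coordinates to pressure, and a PSF whose width is a learnable parameter, jointly fitted to a measurement. The network sizes, the separable Gaussian PSF form, and the random stand-in measurement are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n),
                        torch.linspace(-1, 1, n), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)

mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
log_sigma = torch.zeros((), requires_grad=True)        # learnable PSF width
meas = torch.rand(1, 1, n, n)                          # stand-in for AR-PAM data

opt = torch.optim.Adam(list(mlp.parameters()) + [log_sigma], lr=1e-3)
t = torch.arange(-4.0, 5.0)                            # 9-tap kernel support
for step in range(200):
    img = mlp(coords).reshape(1, 1, n, n)              # sample the INR on a grid
    g = torch.exp(-0.5 * (t / log_sigma.exp()) ** 2)
    k_row = (g / g.sum()).reshape(1, 1, 1, -1)         # separable Gaussian PSF
    k_col = k_row.transpose(2, 3)
    blurred = F.conv2d(F.conv2d(img, k_row, padding=(0, 4)), k_col, padding=(4, 0))
    loss = F.mse_loss(blurred, meas)                   # fit blurred INR to data
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss), float(log_sigma.exp()))
```

Because the MLP is a continuous function of position, it can be sampled on a grid finer than the scan spacing, which is how the under-sampling is addressed.
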
Unrolling Reweighted Total Variation-Based Split Bregman Iterative Framework for Electrical Impedance Tomography Image Reconstruction
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-21 DOI: 10.1109/TCI.2025.3572286
Zichen Wang;Tao Zhang;Qi Wang
{"title":"Unrolling Reweighted Total Variation-Based Split Bregman Iterative Framework for Electrical Impedance Tomography Image Reconstruction","authors":"Zichen Wang;Tao Zhang;Qi Wang","doi":"10.1109/TCI.2025.3572286","DOIUrl":"https://doi.org/10.1109/TCI.2025.3572286","url":null,"abstract":"Electrical impedance tomography (EIT) is one of the typical ill-posed inverse problems, where serious ill-posedness and the linear approximation of the forward operator lead to obvious distortions and artifacts in the degraded reconstructions, further limiting its practical application. The learning-based strategies with image enhancement have been introduced into EIT reconstruction and also achieved improvements. Nevertheless, this idea ignores the priori knowledge of physical information, while not fully exploiting data consistency, resulting in poor generalization and interpretability. In this work, a reweighted Split Bregman (SB) iterative algorithm is proposed regularized by total variation firstly, referred to as RwTVSB. Moreover, the RwTVSB iteration is unrolled into a neural network-based learning framework, dubbed as RwTVSB-Net. The reweighted matrix is introduced to the SB iteration, which could overcome the loss of information of the forward operator due to the linear approximation and also enhance the constraints of the physical priori. Specifically, (1) a network based on residual connection and SE-attention is designed to update the reweighted matrix. (2) Further, a U-shaped architecture with deformable large kernel convolution, dilated convolution, and cross-attention is embedded into this unrolling framework to learn the soft threshold operator. This not only maintains consistency with the RwTVSB iterative algorithm but also uses multi-scale features to fusion information at multiple levels. Both simulated and real-world measured data are employed to validate the effectiveness and advantages of the proposed RwTVSB-Net. The visual reconstructions and quantitative metrics show that RwTVSB-Net outperforms other state-of-the-art methods. In addition, the robustness of the method is tested and validated on multiple imaging tasks.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"748-763"},"PeriodicalIF":4.2,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144196802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
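
For reference, the classical Split Bregman TV iteration that such unrolling starts from, reduced to a 1D denoising toy: the soft-threshold step below is the operator the paper replaces with a learned network, and the reweighting matrix and EIT forward operator are omitted.

```python
import numpy as np

def shrink(v, t):                              # soft threshold, prox of |.|
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)         # piecewise-constant signal
y = truth + 0.1 * rng.standard_normal(truth.size)

lam, mu = 0.1, 1.0
d = np.zeros(y.size - 1); b = np.zeros(y.size - 1)
D = np.diff(np.eye(y.size), axis=0)            # finite-difference operator
A = np.eye(y.size) + mu * D.T @ D              # normal matrix of the x-update
for it in range(50):
    x = np.linalg.solve(A, y + mu * D.T @ (d - b))  # quadratic x-update
    d = shrink(D @ x + b, lam / mu)                 # TV prox (soft threshold)
    b += D @ x - d                                  # Bregman variable update
print(np.round(x[::50], 2))                    # recovers roughly [0, 1, 0.3]
```
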
Unsupervised Low-Dose CT Reconstruction With One-Way Conditional Normalizing Flows
IF 4.2 · CAS Zone 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date: 2025-03-19 DOI: 10.1109/TCI.2025.3553039
Ran An;Ke Chen;Hongwei Li
{"title":"Unsupervised Low-Dose CT Reconstruction With One-Way Conditional Normalizing Flows","authors":"Ran An;Ke Chen;Hongwei Li","doi":"10.1109/TCI.2025.3553039","DOIUrl":"https://doi.org/10.1109/TCI.2025.3553039","url":null,"abstract":"Deep-learning techniques have demonstrated significant potential in low-dose computed tomography (LDCT) reconstruction. Nevertheless, supervised methods are limited by the scarcity of labeled data in clinical scenarios, while CNN-based unsupervised denoising methods often result in excessive smoothing of reconstructed images. Although normalizing flows (NFs) based methods have shown promise in generating detail-rich images and avoiding over-smoothing, they face two key challenges: (1) Existing two-way transformation strategies between noisy images and latent variables, despite leveraging the regularization and generation capabilities of NFs, can lead to detail loss and secondary artifacts; and (2) Training NFs on high-resolution CT images is computationally intensive. While conditional normalizing flows (CNFs) can mitigate computational costs by learning conditional probabilities, current methods rely on labeled data for conditionalization, leaving unsupervised CNF-based LDCT reconstruction an unresolved challenge. To address these issues, we propose a novel unsupervised LDCT iterative reconstruction algorithm based on CNFs. Our approach implements a strict one-way transformation during alternating optimization in the dual spaces, effectively preventing detail loss and secondary artifacts. Additionally, we propose an unsupervised conditionalization strategy, enabling efficient training of CNFs on high-resolution CT images and achieving fast, high-quality unsupervised reconstruction. Experimental results across multiple datasets demonstrate that the proposed method outperforms several state-of-the-art unsupervised methods and even rivals some supervised approaches.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"485-496"},"PeriodicalIF":4.2,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
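
The building block of any conditional normalizing flow is a coupling layer whose scale and shift depend on the conditioning input; a minimal affine version follows. The one-way training strategy and the paper's full flow are not reproduced, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class CondCoupling(nn.Module):
    """Conditional affine coupling: transform z2 given z1 and a condition."""
    def __init__(self, dim: int, cdim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cdim, 128), nn.ReLU(),
            nn.Linear(128, dim),               # outputs scale and shift
        )

    def forward(self, z, cond):                # z: (B, dim), cond: (B, cdim)
        z1, z2 = z.chunk(2, dim=1)
        s, t = self.net(torch.cat([z1, cond], 1)).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales numerically stable
        z2 = z2 * s.exp() + t                  # invertible given z1 and cond
        logdet = s.sum(dim=1)                  # log |det Jacobian| of the map
        return torch.cat([z1, z2], 1), logdet

layer = CondCoupling(dim=64, cdim=16)
z, c = torch.randn(2, 64), torch.randn(2, 16)
out, logdet = layer(z, c)
print(out.shape, logdet.shape)                 # (2, 64), (2,)
```

Because z1 passes through unchanged, the layer is exactly invertible, and the log-determinant term makes maximum-likelihood training of the flow tractable.
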