IEEE Transactions on Computational Imaging: Latest Articles

Full Matrix Wavefield Migration for Layered Photoacoustic Imaging
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-02-17 DOI: 10.1109/TCI.2025.3530256
Kang Qin;Meng Cao;Peng Ren;Fengchen Luo;Siyu Liu
{"title":"Full Matrix Wavefield Migration for Layered Photoacoustic Imaging","authors":"Kang Qin;Meng Cao;Peng Ren;Fengchen Luo;Siyu Liu","doi":"10.1109/TCI.2025.3530256","DOIUrl":"https://doi.org/10.1109/TCI.2025.3530256","url":null,"abstract":"Medium heterogeneity poses a severe challenge to image reconstruction in transcranial photoacoustic tomography, which cannot be fully addressed by the homogeneous phase shift migration method. Although the existing methods can enhancethe imaging quality to a certain extent, they are limited by the large approximation errors and low computational efficiency. To further improve imaging performance and calculation speed, this paper proposes full matrix wavefield migration, which takes into account both lateral and longitudinal variations of speed of sound (SOS). Unlike the PSM method which relies on a layer-by-layer migration framework, the proposed approach reformulates the SOS map across the propagation medium into a spatial matrix of SOS. By means of extrapolating wavefield data in the wavenumber domain and correcting phase deviations in the spatial domain, this method reduces the image distortion caused by SOS irregularity and suppresses artifacts in reconstructed images. Moreover, the calculation process is further optimized to eliminate redundancy. 
Simulation and experimental results demonstrate that full matrix wavefield migration method improves lateral resolution (up to 21.24%) and computational efficiency (about 19.84%) compared to the previous methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"179-188"},"PeriodicalIF":4.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143430535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
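The wavenumber-domain extrapolation step that the abstract above builds on (and that classic phase shift migration performs under a constant speed of sound) can be sketched as follows; the constant-SOS assumption, grid parameters, and function name are illustrative, not the authors' implementation:

```python
import numpy as np

def phase_shift_extrapolate(field, dx, dt, dz, c):
    """Downward-continue a recorded wavefield (time x lateral position) by dz.

    Classic constant-SOS phase shift step: multiply each (omega, kx) component
    by exp(i * kz * dz), with kz = sign(omega) * sqrt(omega^2/c^2 - kx^2).
    Evanescent components (kz^2 <= 0) are discarded.
    """
    nt, nx = field.shape
    spec = np.fft.fft2(field)
    omega = 2 * np.pi * np.fft.fftfreq(nt, dt)[:, None]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, :]
    kz2 = (omega / c) ** 2 - kx ** 2
    propagating = kz2 > 0
    # sign(omega) keeps the spectrum Hermitian, so the output stays real
    kz = np.sign(omega) * np.sqrt(np.where(propagating, kz2, 0.0))
    spec = np.where(propagating, spec * np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft2(spec).real
```

Per the abstract, full matrix wavefield migration replaces the single constant `c` with a spatial matrix of SOS values and adds a spatial-domain phase correction; the constant-`c` step above is the baseline it generalizes.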
Contrast-Aware Network With Aggregated-Interacted Transformer and Multi-Granularity Aligned Contrastive Learning for Synthesizing Contrast-Enhanced Abdomen CT Imaging
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-02-10 DOI: 10.1109/TCI.2025.3540711
Qikui Zhu;Andrew L. Wentland;Shuo Li
{"title":"Contrast-Aware Network With Aggregated-Interacted Transformer and Multi-Granularity Aligned Contrastive Learning for Synthesizing Contrast-Enhanced Abdomen CT Imaging","authors":"Qikui Zhu;Andrew L. Wentland;Shuo Li","doi":"10.1109/TCI.2025.3540711","DOIUrl":"https://doi.org/10.1109/TCI.2025.3540711","url":null,"abstract":"Contrast-enhanced CT imaging (CECTI) is crucial for the diagnosis of patients with liver tumors. Therefore, if CECTI can be synthesized using only non-contrast CT imaging (NCCTI), it will provide significant clinical advantages. We propose a novel contrast-aware network with Aggregated-interacted Transformer and Multi-granularity aligned contrastive learning (AMNet) for CECTI synthesizing, which enables synthesizing CECTI for the first time. AMNet mitigates the challenges associated with high-risk, time-consuming, expensive, and radiation-intensive procedures required for obtaining CECTI. Furthermore, it overcomes the challenges of low contrast and low sensitivity in CT imaging through four key innovations to address these challenges: 1) The Aggregated-Interacted Transformer (AI-Transformer) introduces two mechanisms: multi-scale token aggregation and cross-token interaction. These enable long-range dependencies between multi-scale cross-tokens, facilitating the extraction of discriminative structural and content features of tissues, thereby addressing the low-contrast challenge. 2) The Multi-granularity Aligned Contrastive Learning (MACL) constructs a new regularization term for exploiting intra-domain compact and inter-domain separable features to improve the model's sensitivity to chemical contrast agents (CAs) and overcome the low sensitivity challenge. 3) The Contrast-Aware Adaptive Layer (CAL) imbues the AMNet with contrast-aware abilities that adaptively adjust the contrast information of various regions to achieve perfect synthesis. 
4) The dual-stream discriminator (DSD) adopts an ensemble strategy to evaluate the synthetic CECTI from multiple perspectives. AMNet is validated using two corresponding CT imaging modalities (pre-contrast and portal venous-phase), an essential procedure for liver tumor biopsy. Experimental results demonstrate that our AMNet has successfully synthesized CECTI without chemical CA injections for the first time.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"277-288"},"PeriodicalIF":4.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Static CT With Sources and Detectors Distributed in a Multi-Segment Manner: System Analysis and Analytical Reconstruction
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-02-10 DOI: 10.1109/TCI.2025.3540707
Changyu Chen;Yuxiang Xing;Li Zhang;Zhiqiang Chen
{"title":"Static CT With Sources and Detectors Distributed in a Multi-Segment Manner: System Analysis and Analytical Reconstruction","authors":"Changyu Chen;Yuxiang Xing;Li Zhang;Zhiqiang Chen","doi":"10.1109/TCI.2025.3540707","DOIUrl":"https://doi.org/10.1109/TCI.2025.3540707","url":null,"abstract":"In this work, we investigate the feature of projection sampling and analytical reconstruction algorithms for a Static CT with sources and detectors distributed in a Multi-Segment manner (MS-StaticCT). MS-StaticCT is a generalized configuration of previous static linear CT systems offering enhanced design flexibility and utilization efficiency in both X-ray source and detector components. By analyzing the imaging geometry of single-segment source and detector pairs, we delved into the Radon space properties of MS-StaticCT and proposed a data sufficiency condition for system design. To explore the impact of the unique sampling characteristics of MS-StaticCT on reconstruction quality, we derived analytical algorithms under two popular pipelines filtered-backprojection (MS-FBP) and differentiated backprojection filtration (MS-DBF), and assessed their performance. Due to the non-uniform sampling and singular points between segments, the global filtration process of MS-FBP requires local rebinning. The local nature of differentiation enables convenient filtration without rebinning. Besides, to address insufficient data caused by optical obstruction by sources and detectors, we incorporated multiple imaging planes and designed a generalized weighting function that efficiently utilizes conjugate projections. Simulation studies on numerical phantoms and clinical CT data demonstrate the feasibility of MS-StaticCT and the proposed reconstruction algorithms. 
The results highlighted MS-DBF's superiority in accuracy and spatial resolution for multi-segment geometries without compromising noise performance compared to MS-FBP whose performance depends on the number of detector segments involved for each focal spot. Our study provides a comprehensive understanding of the essential data structure and basic reconstruction tailored for systems characterized by linear source trajectories and detectors.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"251-264"},"PeriodicalIF":4.2,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143563998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
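The contrast the abstract above draws between global filtration (the FBP-style step) and local differentiation (the DBF-style step) can be illustrated with two one-row operations; the function names and array shapes are illustrative, not the paper's algorithms:

```python
import numpy as np

def ramp_filter(proj):
    """Global frequency-domain ramp filtering, the FBP-style step. Every output
    sample depends on the whole detector row, which is why non-uniform
    multi-segment sampling forces local rebinning before filtering."""
    freqs = np.fft.fftfreq(proj.shape[-1])
    return np.fft.ifft(np.fft.fft(proj, axis=-1) * np.abs(freqs), axis=-1).real

def local_derivative(proj, pitch):
    """Local finite-difference derivative, the DBF-style step. Each output
    sample depends only on neighboring detector cells, so singular points
    between segments stay confined instead of spreading across the row."""
    return np.gradient(proj, pitch, axis=-1)
```

The locality of `local_derivative` is what lets the paper's MS-DBF pipeline skip the rebinning that MS-FBP's global filtration requires.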
GroupCDL: Interpretable Denoising and Compressed Sensing MRI via Learned Group-Sparsity and Circulant Attention
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-02-05 DOI: 10.1109/TCI.2025.3539021
Nikola Janjušević;Amirhossein Khalilian-Gourtani;Adeen Flinker;Li Feng;Yao Wang
{"title":"GroupCDL: Interpretable Denoising and Compressed Sensing MRI via Learned Group-Sparsity and Circulant Attention","authors":"Nikola Janjušević;Amirhossein Khalilian-Gourtani;Adeen Flinker;Li Feng;Yao Wang","doi":"10.1109/TCI.2025.3539021","DOIUrl":"https://doi.org/10.1109/TCI.2025.3539021","url":null,"abstract":"Nonlocal self-similarity within images has become an increasingly popular prior in deep-learning models. Despite their successful image restoration performance, such models remain largely uninterpretable due to their black-box construction. Our previous studies have shown that interpretable construction of a fully convolutional denoiser (CDLNet), with performance on par with state-of-the-art black-box counterparts, is achievable by unrolling a convolutional dictionary learning algorithm. In this manuscript, we seek an interpretable construction of a convolutional network with a nonlocal self-similarity prior that performs on par with black-box nonlocal models. We show that such an architecture can be effectively achieved by upgrading the <inline-formula><tex-math>$ell _{1}$</tex-math></inline-formula> sparsity prior (soft-thresholding) of CDLNet to an image-adaptive group-sparsity prior (group-thresholding). The proposed learned group-thresholding makes use of nonlocal attention to perform spatially varying soft-thresholding on the latent representation. To enable effective training and inference on large images with global artifacts, we propose a novel <italic>circulant-sparse attention</i>. We achieve competitive natural-image denoising performance compared to black-box nonlocal DNNs and transformers. The interpretable construction of our network allows for a straightforward extension to Compressed Sensing MRI (CS-MRI), yielding state-of-the-art performance. 
Lastly, we show robustness to noise-level mismatches between training and inference for denoising and CS-MRI reconstruction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"201-212"},"PeriodicalIF":4.2,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143455295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
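The upgrade described in the abstract above, from elementwise soft-thresholding to an attention-pooled group-thresholding, can be sketched as follows; the pooling matrix `weights` stands in for the paper's learned nonlocal attention and is an assumption of this sketch:

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise l1 proximal operator used in CDLNet: sign(z)*max(|z|-lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def group_threshold(z, lam, weights):
    """Illustrative group-thresholding: the shrinkage applied to each
    coefficient is driven by evidence pooled over a neighborhood (a weighted
    average of magnitudes), so spatially similar coefficients survive or
    vanish together."""
    pooled = weights @ np.abs(z)  # nonlocal evidence for each coefficient
    scale = np.maximum(1.0 - lam / np.maximum(pooled, 1e-12), 0.0)
    return z * scale
```

With an identity pooling matrix, `group_threshold` reduces to plain soft-thresholding; a non-trivial `weights` matrix is what makes the threshold image-adaptive and spatially varying.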
Cascaded Frequency-Encoded Multi-Scale Neural Fields for Sparse-View CT Reconstruction
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-31 DOI: 10.1109/TCI.2025.3536078
Jia Wu;Jinzhao Lin;Yu Pang;Xiaoming Jiang;Xinwei Li;Hongying Meng;Yamei Luo;Lu Yang;Zhangyong Li
{"title":"Cascaded Frequency-Encoded Multi-Scale Neural Fields for Sparse-View CT Reconstruction","authors":"Jia Wu;Jinzhao Lin;Yu Pang;Xiaoming Jiang;Xinwei Li;Hongying Meng;Yamei Luo;Lu Yang;Zhangyong Li","doi":"10.1109/TCI.2025.3536078","DOIUrl":"https://doi.org/10.1109/TCI.2025.3536078","url":null,"abstract":"Sparse-view computed tomography aims to reduce radiation exposure but often suffers from degraded image quality due to insufficient projection data. Traditional methods struggle to balance data fidelity and detail preservation, particularly in high-frequency regions. In this paper, we propose a Cascaded Frequency-Encoded Multi-Scale Neural Fields (Ca-FMNF) framework. We reformulate the reconstruction task as refining high-frequency residuals upon a high-quality low-frequency foundation. It integrates a pre-trained iterative unfolding network for initial low-frequency estimation with a FMNF to represent high-frequency residuals. The FMNF parameters are optimized by minimizing the discrepancy between the measured projections and those estimated through the imaging forward model, thereby refining the residuals based on the initial estimation. This dual-stage strategy enhances data consistency and preserves fine structures. 
The extensive experiments on simulated and clinical datasets demonstrate that our method achieves the optimal results in both quantitative metrics and visual quality, effectively reducing artifacts and preserving structural details.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"237-250"},"PeriodicalIF":4.2,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143553245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
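The frequency encoding in the framework's name refers to the standard multi-scale sinusoidal feature map used by neural fields, whose higher bands let an MLP fit high-frequency residuals like the ones Ca-FMNF targets. A minimal sketch, with a geometric band layout that is an assumption of this example rather than the paper's exact encoder:

```python
import numpy as np

def frequency_encode(coords, num_bands):
    """Map coordinates (n, d) to multi-scale sinusoidal features
    (n, d * 2 * num_bands) over a geometric ladder of frequencies."""
    coords = np.atleast_2d(coords)               # (n, d)
    freqs = 2.0 ** np.arange(num_bands)          # 1, 2, 4, 8, ...
    angles = coords[..., None] * freqs * np.pi   # (n, d, num_bands)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(coords.shape[0], -1)
```

Feeding such features, instead of raw coordinates, into an MLP is what allows a neural field to represent sharp structures that a low-frequency network alone would smooth out.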
Computational Comparison and Validation of Point Spread Functions for Optical Microscopes
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-29 DOI: 10.1109/TCI.2025.3536106
Zicheng Liu;Yingying Qin;Jean-Claude Tinguely;Krishna Agarwal
{"title":"Computational Comparison and Validation of Point Spread Functions for Optical Microscopes","authors":"Zicheng Liu;Yingying Qin;Jean-Claude Tinguely;Krishna Agarwal","doi":"10.1109/TCI.2025.3536106","DOIUrl":"https://doi.org/10.1109/TCI.2025.3536106","url":null,"abstract":"Point spread function (PSF) is quite important in modern computational microscopy techniques. Various approaches for measuring and modeling point spread functions have been proposed for both fluorescence and label-free microscopes. Among the various PSF candidates, it is often difficult to evaluate which PSF best suits the microscope and the experimental conditions. Visual qualification is often applied because there are hardly any techniques to quantify the quality of PSF as a basis for comparing different candidates and selecting the best one. To address this gap, we present a validation scheme based on the concept of confidence interval to evaluate the quality of fit of the PSF. This scheme is rigorous and supports precise validation for any microscope's PSF irrespective of their complexity, improving the performance of computational nanoscopy on them. We first demonstrate proof-of-principle of our scheme for a complex but practical label-free coherent imaging setup by comparing a variety of scalar and dyadic PSFs. Next, we validate our approach on conventional scalar PSFs using fluorescence based single molecule localization microscopy which needs PSF to compute the locations of single molecules. Lastly, we demonstrate how the scheme can be used in practice for challenging scenarios using images of gold nanorods placed on and illuminated by a photonic chip waveguide imaged using a label-free dark-field microscopy setup. 
Through these experiments, we demonstrate the generality and versatility of our PSF validation approach for the microscopy domain.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"170-178"},"PeriodicalIF":4.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10857452","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143430536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Iterative Approach to Reconstructing Neural Disparity Fields From Light-Field Data
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-29 DOI: 10.1109/TCI.2025.3536098
Ligen Shi;Chang Liu;Xing Zhao;Jun Qiu
{"title":"Iterative Approach to Reconstructing Neural Disparity Fields From Light-Field Data","authors":"Ligen Shi;Chang Liu;Xing Zhao;Jun Qiu","doi":"10.1109/TCI.2025.3536098","DOIUrl":"https://doi.org/10.1109/TCI.2025.3536098","url":null,"abstract":"This study proposes a neural disparity field (NDF) that establishes an implicit, continuous representation of scene disparity based on a neural field and an iterative approach to address the inverse problem of NDF reconstruction from light-field (LF) data. NDF enables seamless and precise characterization of disparity variations in three-dimensional scenes and can discretize disparity at any arbitrary resolution, overcoming the limitations of traditional disparity maps that are prone to sampling errors and interpolation inaccuracies. The proposed NDF network architecture utilizes hash encoding combined with multilayer perceptrons (MLPs) to capture detailed disparities in texture levels, thereby enhancing its ability to represent the geometric information of complex scenes. By leveraging the spatial-angular consistency inherent in the LF data, a differentiable forward model to generate a central view image from the LF data is developed. Based on the forward model, an optimization scheme for the inverse problem of NDF reconstruction using differentiable propagation operators is established. Furthermore, an iterative solution method is adopted to reconstruct the NDF in the optimization scheme, which does not require training datasets and applies to LF data captured by various acquisition methods. Experimental results demonstrate that the proposed method can reconstruct high-quality NDF from LF data. 
The high-resolution disparity can be effectively recovered by NDF, demonstrating its capability for the implicit, continuous representation of scene disparities.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"410-420"},"PeriodicalIF":4.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
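The hash encoding mentioned in the abstract above maps a continuous coordinate to trainable features by hashing the corners of its grid cell into a small table and blending them. A toy single-level 2D sketch, in the spirit of that encoding; the level count, table size, and hash constants here are illustrative, not the paper's configuration:

```python
import numpy as np

def hash_encode(coords, table, resolution):
    """Single-level hash-grid lookup: bilinearly blend the feature rows hashed
    from the four corners of each point's grid cell."""
    primes = np.array([1, 2654435761], dtype=np.uint64)  # classic hash constants
    scaled = np.asarray(coords) * resolution
    base = np.floor(scaled).astype(np.uint64)
    frac = scaled - np.floor(scaled)
    feats = 0.0
    for cx in (0, 1):
        for cy in (0, 1):
            corner = base + np.array([cx, cy], dtype=np.uint64)
            idx = (corner * primes).sum(axis=-1) % np.uint64(len(table))
            w = (frac[..., 0] if cx else 1 - frac[..., 0]) * \
                (frac[..., 1] if cy else 1 - frac[..., 1])
            feats = feats + w[..., None] * table[idx.astype(int)]
    return feats
```

In a full encoder, many such levels at different resolutions are concatenated and fed to a small MLP, and the table entries are trained by backpropagation.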
Looking Around Flatland: End-to-End 2D Real-Time NLOS Imaging
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-29 DOI: 10.1109/TCI.2025.3536092
María Peña;Diego Gutierrez;Julio Marco
{"title":"Looking Around Flatland: End-to-End 2D Real-Time NLOS Imaging","authors":"María Peña;Diego Gutierrez;Julio Marco","doi":"10.1109/TCI.2025.3536092","DOIUrl":"https://doi.org/10.1109/TCI.2025.3536092","url":null,"abstract":"Time-gated non-line-of-sight (NLOS) imaging methods reconstruct scenes hidden around a corner by inverting the optical path of indirect photons measured at visible surfaces. These methods are, however, hindered by intricate, time-consuming calibration processes involving expensive capture hardware. Simulation of transient light transport in synthetic 3D scenes has become a powerful but computationally-intensive alternative for analysis and benchmarking of NLOS imaging methods. NLOS imaging methods also suffer from high computational complexity. In our work, we rely on dimensionality reduction to provide a real-time simulation framework for NLOS imaging performance analysis. We extend steady-state light transport in self-contained 2D worlds to take into account the propagation of time-resolved illumination by reformulating the transient path integral in 2D. We couple it with the recent phasor-field formulation of NLOS imaging to provide an end-to-end simulation and imaging pipeline that incorporates different NLOS imaging camera models. Our pipeline yields real-time NLOS images and progressive refinement of light transport simulations. We allow comprehensive control on a wide set of scene, rendering, and NLOS imaging parameters, providing effective real-time analysis of their impact on reconstruction quality. 
We illustrate the effectiveness of our pipeline by validating 2D counterparts of existing 3D NLOS imaging experiments, and provide an extensive analysis of imaging performance including a wider set of NLOS imaging conditions, such as filtering, reflectance, and geometric features in NLOS imaging setups.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"189-200"},"PeriodicalIF":4.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10857386","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143430534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
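The core inversion that time-gated NLOS methods perform can be illustrated in 2D with the classic backprojection baseline (not the paper's phasor-field pipeline): each candidate hidden point accumulates the transient samples whose round-trip travel time matches its distance to each relay-wall position. All names and the confocal geometry are assumptions of this sketch:

```python
import numpy as np

def backproject_2d(transients, sensor_x, grid_pts, wall_y, c, dt):
    """Toy confocal 2D NLOS backprojection over a list of candidate points.

    transients: (num_sensors, num_time_bins) time-resolved measurements at
    relay-wall positions sensor_x (all at height wall_y)."""
    image = np.zeros(len(grid_pts))
    for i, p in enumerate(grid_pts):
        for j, xs in enumerate(sensor_x):
            d = np.hypot(p[0] - xs, p[1] - wall_y)  # wall point -> hidden point
            t_bin = int(round(2 * d / (c * dt)))    # confocal round trip
            if 0 <= t_bin < transients.shape[1]:
                image[i] += transients[j, t_bin]
    return image
```

A true hidden scatterer receives consistent contributions from every wall position, so it dominates the reconstructed image; phasor-field imaging replaces this direct summation with a filtered, wave-based formulation.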
Dual Bidirectional Feature Enhancement Network for Continuous Space-Time Video Super-Resolution
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-29 DOI: 10.1109/TCI.2025.3531717
Laigan Luo;Benshun Yi;Zhongyuan Wang;Zheng He;Chao Zhu
{"title":"Dual Bidirectional Feature Enhancement Network for Continuous Space-Time Video Super-Resolution","authors":"Laigan Luo;Benshun Yi;Zhongyuan Wang;Zheng He;Chao Zhu","doi":"10.1109/TCI.2025.3531717","DOIUrl":"https://doi.org/10.1109/TCI.2025.3531717","url":null,"abstract":"Space-time video super-resolution aims to reconstruct the high-frame-rate and high-resolution video from the corresponding low-frame-rate and low-resolution counterpart. Currently, the task faces the challenge of efficiently extracting long-range temporal information from available frames. Meanwhile, existing methods can only produce results for a specific moment and cannot interpolate high-resolution frames for consecutive time stamps. To address these issues, we propose a multi-stage feature enhancement method that better utilizes the limited spatio-temporal information subject to the efficiency constraint. Our approach involves a pre-alignment module that extracts coarse aligned features from the adjacent odd-numbered frames in the first stage. In the second stage, we use a bidirectional recurrent module to refine the aligned features by exploiting the long-range information from all input frames while simultaneously performing video frame interpolation. The proposed video frame interpolation module concatenates temporal information with spatial features to achieve continuous interpolation, which refines the interpolated feature progressively and enhances the spatial information by utilizing the features of different scales. 
Extensive experiments on various benchmarks demonstrate that the proposed method outperforms state-of-the-art in both quantitative metrics and visual effects.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"228-236"},"PeriodicalIF":4.2,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143535473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Clip-Driven Universal Model for Multi-Material Decomposition in Dual-Energy CT
IF 4.2, CAS Region 2 (Computer Science)
IEEE Transactions on Computational Imaging Pub Date : 2025-01-20 DOI: 10.1109/TCI.2025.3531707
Xianghong Wang;Jiajun Xiang;Aihua Mao;Jiayi Xie;Peng Jin;Mingchao Ding;Yixuan Yuan;Yanye Lu;Lequan Yu;Hongmin Cai;Baiying Lei;Tianye Niu
{"title":"Clip-Driven Universal Model for Multi-Material Decomposition in Dual-Energy CT","authors":"Xianghong Wang;Jiajun Xiang;Aihua Mao;Jiayi Xie;Peng Jin;Mingchao Ding;Yixuan Yuan;Yanye Lu;Lequan Yu;Hongmin Cai;Baiying Lei;Tianye Niu","doi":"10.1109/TCI.2025.3531707","DOIUrl":"https://doi.org/10.1109/TCI.2025.3531707","url":null,"abstract":"Dual-energy computed tomography (DECT) offers quantitative insights and facilitates material decomposition, aiding in precise diagnosis and treatment planning. However, existing methods for material decomposition, often tailored to specific material types, need more generalizability and increase computational load with each additional material. We propose a CLIP-Driven Universal Model for adaptive Multi-Material Decomposition (MMD) to tackle this challenge. This model utilizes the semantic capabilities of text embeddings from Contrastive Language-Image Pre-training (CLIP), allowing a single network to manage structured feature embedding for multiple materials. A novel Siamese encoder and differential map fusion technique have also been integrated to enhance the decomposition accuracy while maintaining robustness across various conditions. Experiments on the simulated and physical patient studies have evidenced our model's superiority over traditional methods. Notably, it has significantly improved the Dice Similarity Coefficient—4.1%. 
These results underscore the potential of our network in clinical MMD applications, suggesting a promising avenue for enhancing DECT imaging analysis.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"349-361"},"PeriodicalIF":4.2,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143716398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
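The conventional baseline that such learned models generalize beyond is image-domain two-material decomposition: a per-pixel linear inversion of the low- and high-energy attenuation images against a fixed material basis. A minimal sketch; the basis attenuation values below are illustrative, not measured coefficients:

```python
import numpy as np

def material_decompose(mu_low, mu_high, basis):
    """Per-pixel two-material decomposition for DECT.

    basis is a 2x2 matrix: rows are energy bins (low, high), columns are
    materials; each pixel's material fractions solve basis @ f = [mu_low, mu_high].
    """
    A = np.asarray(basis)
    mus = np.stack([np.ravel(mu_low), np.ravel(mu_high)])  # (2, num_pixels)
    fractions = np.linalg.solve(A, mus)                    # invert per pixel
    return fractions.reshape((2,) + np.shape(mu_low))
```

This fixed-basis inversion handles exactly two materials and must be re-derived for each new basis, which is the rigidity the CLIP-driven universal model is designed to avoid.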