IEEE Transactions on Computational Imaging: Latest Articles

IEEE Signal Processing Society Information
IEEE Transactions on Computational Imaging, vol. 11, pp. C2-C2. Published 2025-01-08. DOI: 10.1109/TCI.2024.3525385
IF 4.2 | CAS Zone 2 (Computer Science)
Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10833177
Citations: 0
List of Reviewers
IEEE Transactions on Computational Imaging, vol. 10, pp. 1829-1831. Published 2025-01-01. DOI: 10.1109/TCI.2024.3523050
IF 4.2 | CAS Zone 2 (Computer Science)
Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819699
Citations: 0
Ice-Tide: Implicit Cryo-ET Imaging and Deformation Estimation
Valentin Debarnot; Vinith Kishore; Ricardo D. Righetto; Ivan Dokmanić
IEEE Transactions on Computational Imaging, vol. 11, pp. 24-35. Published 2024-12-23. DOI: 10.1109/TCI.2024.3519805
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: We introduce ICE-TIDE, a method for cryogenic electron tomography (cryo-ET) that simultaneously aligns observations and reconstructs a high-resolution volume. The alignment of tilt series in cryo-ET is a major problem limiting the resolution of reconstructions. ICE-TIDE relies on an efficient coordinate-based implicit neural representation of the volume, which enables it to directly parameterize deformations and align the projections. Furthermore, the implicit network acts as an effective regularizer, allowing high-quality reconstruction at low signal-to-noise ratios as well as partially restoring missing-wedge information. We compare the performance of ICE-TIDE to existing approaches on realistic simulated volumes, where the gains in resolution and in the accuracy of recovered deformations can be evaluated precisely. Finally, we demonstrate ICE-TIDE's ability to perform on experimental data sets.
Citations: 0
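The central idea described in the ICE-TIDE abstract, querying one coordinate-based network at deformed coordinates so that the volume and the alignment parameters can be optimized jointly, can be sketched as a toy. Everything here is an assumption for illustration: random Fourier features, a two-layer MLP, and a shift-only per-tilt deformation are not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fourier-feature encoding of 3-D coordinates (a common ingredient of
# implicit neural representations; sizes are hypothetical).
B = rng.normal(scale=3.0, size=(3, 16))  # random projection matrix

def encode(xyz):
    proj = xyz @ B                                               # (N, 16)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)  # (N, 32)

# A tiny two-layer MLP mapping encoded coordinates to a scalar density.
W1 = rng.normal(scale=0.1, size=(32, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, 1));  b2 = np.zeros(1)

def density(xyz):
    h = np.maximum(encode(xyz) @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).ravel()

# Per-tilt deformation: the same network is queried at warped coordinates,
# so alignment parameters (here just a rigid shift) sit inside the forward
# model and could be optimized jointly with the network weights.
def deformed_density(xyz, shift):
    return density(xyz + shift)

pts = rng.uniform(-1, 1, size=(100, 3))
vals = deformed_density(pts, np.array([0.01, -0.02, 0.0]))
```

Because the deformation acts on the input coordinates rather than on a voxel grid, no resampling or interpolation step is needed, which is what makes this parameterization convenient for alignment.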
Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction
Yu Luo; Yue Cai; Jie Ling; Yingdan Ji; Yanmei Tie; Shun Yao
IEEE Transactions on Computational Imaging, vol. 11, pp. 11-23. Published 2024-12-16. DOI: 10.1109/TCI.2024.3518210
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: Magnetic Resonance Imaging (MRI) is a widely used imaging technique; however, it suffers from long scan times. Although previous model-based and learning-based MRI reconstruction methods have shown promising performance, most of them do not fully exploit the edge prior of MR images, leaving considerable room for improvement. In this paper, we build a joint edge optimization model that not only incorporates individual regularizers specific to the MR image and to its edges, but also enforces a co-regularizer to establish a stronger correlation between them. Specifically, the edge information is defined through a non-edge probability map that guides the image reconstruction during the optimization process. Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a-priori information. Numerical experiments on multi-coil and single-coil MRI data with different sampling schemes at a variety of sampling factors demonstrate that the proposed method outperforms other state-of-the-art methods.
Citations: 0
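A minimal sketch of the kind of edge-guided unfolded iteration this abstract describes: a data-consistency gradient step on undersampled k-space, followed by a soft-threshold whose strength is modulated by a non-edge probability map (shrink hard in smooth regions, lightly near edges). The gradient-based non-edge map, the toy phantom, and the fixed step sizes are assumptions; in the paper these quantities are learned by the unfolding network.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
x_true = np.zeros((n, n)); x_true[20:44, 20:44] = 1.0  # toy piecewise-constant "MR image"
mask = rng.random((n, n)) < 0.4                        # k-space sampling mask (~40%)
y = mask * np.fft.fft2(x_true)                         # undersampled measurements

def A(x):  return mask * np.fft.fft2(x)                # forward operator
def At(k): return np.real(np.fft.ifft2(mask * k))      # adjoint

# Non-edge probability map: close to 1 where the image is smooth,
# close to 0 near strong gradients (a fixed surrogate here).
def nonedge_map(x):
    gx, gy = np.gradient(x)
    return 1.0 / (1.0 + 10.0 * np.hypot(gx, gy))

x = At(y)                                              # zero-filled initialization
err0 = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

eta, lam = 1.0, 0.02
for _ in range(30):
    x = x - eta * At(A(x) - y)                         # data-consistency gradient step
    w = nonedge_map(x)                                 # edge-guided weights
    x = np.sign(x) * np.maximum(np.abs(x) - lam * w, 0)  # weighted soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

On this noiseless toy the edge-weighted iterations reduce the aliasing error relative to the zero-filled reconstruction, which is the behavior the co-regularized model is meant to amplify with learned components.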
FgC2F-UDiff: Frequency-Guided and Coarse-to-Fine Unified Diffusion Model for Multi-Modality Missing MRI Synthesis
Xiaojiao Xiao; Qinmin Vivian Hu; Guanghui Wang
IEEE Transactions on Computational Imaging, vol. 10, pp. 1815-1828. Published 2024-12-12. DOI: 10.1109/TCI.2024.3516574
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: Multi-modality magnetic resonance imaging (MRI) is essential for the diagnosis and treatment of brain tumors. However, missing modalities are common due to limitations in scan time, scan corruption, artifacts, motion, and contrast agent intolerance. Synthesis of missing MRI modalities has become a means to address modality insufficiency in clinical practice and research, yet challenges remain, such as poor generalization, inaccurate non-linear mapping, and slow processing speeds. To address these issues, we propose a novel unified synthesis model, the Frequency-guided and Coarse-to-fine Unified Diffusion Model (FgC2F-UDiff), designed for multiple inputs and outputs. First, the Coarse-to-fine Unified Network (CUN) fully exploits the iterative denoising properties of diffusion models, from global to detail, by dividing the denoising process into two stages, coarse and fine, to enhance the fidelity of synthesized images. Second, the Frequency-guided Collaborative Strategy (FCS) harnesses appropriate frequency information as prior knowledge to guide the learning of a unified, highly non-linear mapping. Third, the Specific-acceleration Hybrid Mechanism (SHM) integrates specific mechanisms to accelerate the diffusion model and enhance the feasibility of many-to-many synthesis. Extensive experimental evaluations demonstrate that the proposed FgC2F-UDiff model achieves superior performance on two datasets, validated through a comprehensive assessment that includes both qualitative observations and quantitative metrics such as PSNR, SSIM, LPIPS, and FID.
Citations: 0
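The frequency guidance mentioned in the abstract conditions the model on separate frequency bands of the image. A simple way to produce such bands, shown here purely as an illustration and not as the paper's FCS module, is a low-pass/high-pass split with an FFT mask; the cutoff radius is an arbitrary choice.

```python
import numpy as np

def frequency_split(img, radius=8):
    """Split a 2-D image into low- and high-frequency components.

    A circular mask of the given radius in the centered FFT plane keeps
    the low frequencies; the high-frequency part is the residual, so the
    two components sum exactly back to the input.
    """
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    high = img - low
    return low, high

rng = np.random.default_rng(4)
img = rng.random((64, 64))
low, high = frequency_split(img)
```

A coarse denoising stage could then be supervised mainly on the low-frequency component and a fine stage on the high-frequency residual, which is the general shape of coarse-to-fine frequency guidance.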
A Multi-Stage Progressive Network for Hyperspectral Image Demosaicing and Denoising
Zhangxi Xiong; Wei Li; Hanzheng Wang; Baochang Zhang; James E. Fowler
IEEE Transactions on Computational Imaging, vol. 11, pp. 1-10. Published 2024-12-11. DOI: 10.1109/TCI.2024.3515844
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: While snapshot hyperspectral cameras are cheaper and faster than imagers based on pushbroom or whiskbroom spatial scanning, the output imagery from a snapshot camera typically has different spectral bands mapped to different spatial locations in a mosaic pattern, requiring a demosaicing process to generate the desired hyperspectral image at full spatial and spectral resolution. However, many existing demosaicing algorithms suffer from common artifacts such as periodic striping or other forms of noise. To ameliorate these issues, a hyperspectral demosaicing framework is proposed that couples a preliminary demosaicing network with a separate multi-stage progressive denoising network, with both networks employing transformer and attention mechanisms. A multi-term loss function permits supervised network training to monitor not only the performance of the preliminary demosaicing but also the denoising at each stage. An extensive collection of experimental results demonstrates that the proposed approach produces demosaiced images with fewer visual artifacts and improved performance on several quantitative measures compared with other state-of-the-art demosaicing methods from the recent literature.
Citations: 0
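To make the mosaic pattern concrete: in a snapshot camera each spectral band is sampled only at its own spatial offsets, and demosaicing must fill in the rest. The sketch below simulates a 2x2 spectral mosaic and a nearest-neighbour baseline; this is an illustrative toy baseline, not the transformer network the paper proposes.

```python
import numpy as np

def mosaic(cube, pattern=2):
    """Simulate a pattern x pattern spectral mosaic.

    Each of the pattern**2 bands is sampled only at its own (row, col)
    offset within every pattern x pattern tile, as in a snapshot sensor.
    """
    h, w, b = cube.shape
    assert b == pattern * pattern
    raw = np.zeros((h, w))
    for i in range(pattern):
        for j in range(pattern):
            raw[i::pattern, j::pattern] = cube[i::pattern, j::pattern, i * pattern + j]
    return raw

def demosaic_nearest(raw, pattern=2):
    """Naive demosaicing: copy each band's sampled pixels to its tile."""
    h, w = raw.shape
    cube = np.zeros((h, w, pattern * pattern))
    for i in range(pattern):
        for j in range(pattern):
            sub = raw[i::pattern, j::pattern]
            full = np.repeat(np.repeat(sub, pattern, axis=0), pattern, axis=1)
            cube[:, :, i * pattern + j] = full[:h, :w]
    return cube

rng = np.random.default_rng(5)
cube = rng.random((8, 8, 4))
raw = mosaic(cube)
rec = demosaic_nearest(raw)
```

Block-copy interpolation like this is exactly what produces the periodic striping artifacts the abstract mentions, which motivates following the demosaicing stage with learned denoising.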
FoveaSPAD: Exploiting Depth Priors for Adaptive and Efficient Single-Photon 3D Imaging
Justin Folden; Atul Ingle; Sanjeev J. Koppal
IEEE Transactions on Computational Imaging, vol. 10, pp. 1722-1735. Published 2024-12-09. DOI: 10.1109/TCI.2024.3503360
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: Fast, efficient, and accurate depth sensing is important for safety-critical applications such as autonomous vehicles. Direct time-of-flight LiDAR has the potential to fulfill these demands, thanks to its ability to provide high-precision depth measurements at long standoff distances. While conventional LiDAR relies on avalanche photodiodes (APDs), single-photon avalanche diodes (SPADs) are an emerging image-sensing technology that offers many advantages, such as extreme sensitivity and time resolution. In this paper, we address the key challenges to widespread adoption of SPAD-based LiDARs: their susceptibility to ambient light and the large amount of raw photon data that must be processed to obtain in-pixel depth estimates. We propose new algorithms and sensing policies that improve signal-to-noise ratio (SNR) and increase computing and memory efficiency for SPAD-based LiDARs. During capture, we use external signals to foveate, i.e., guide how the SPAD system estimates scene depths. This foveated approach allows our method to "zoom into" the signal of interest, reducing the amount of raw photon data that needs to be stored and transferred from the SPAD sensor, while also improving resilience to ambient light. We show results both in simulation and with real hardware emulation; specific implementations achieve a 1548-fold reduction in memory usage, and our algorithms can be applied to newly available and future SPAD arrays.
Citations: 0
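The memory argument can be illustrated with a toy foveated histogram: given a coarse depth prior, the sensor only needs to histogram photon timestamps inside a narrow window around the predicted return instead of the full time-of-flight range. All numbers below (bin counts, window width, photon rates) are made-up values for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bins = 4096          # full time-of-flight histogram length
true_bin = 1500        # bin of the actual surface return
prior_bin = 1490       # coarse depth prior (e.g. from a previous frame)
half_window = 64       # foveation half-width around the prior

# Simulate photon detections: a signal pulse near true_bin plus
# uniformly distributed ambient-light photons.
signal = rng.normal(true_bin, 3, size=200).astype(int)
ambient = rng.integers(0, n_bins, size=2000)
stamps = np.concatenate([signal, ambient])
stamps = stamps[(stamps >= 0) & (stamps < n_bins)]

# Foveated capture: keep only timestamps inside the window of interest,
# so in-pixel memory holds 2 * half_window bins instead of n_bins, and
# most ambient photons are rejected outright.
lo, hi = prior_bin - half_window, prior_bin + half_window
kept = stamps[(stamps >= lo) & (stamps < hi)]
hist = np.bincount(kept - lo, minlength=hi - lo)

est_bin = lo + int(np.argmax(hist))
memory_reduction = n_bins / (hi - lo)   # 32x in this toy setup
```

Discarding out-of-window timestamps shrinks both the stored histogram and the ambient contribution, which is why foveation improves SNR and memory efficiency at the same time.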
Light Field Angular Super-Resolution Network Based on Convolutional Transformer and Deep Deblurring
Deyang Liu; Yifan Mao; Yifan Zuo; Ping An; Yuming Fang
IEEE Transactions on Computational Imaging, vol. 10, pp. 1736-1748. Published 2024-12-09. DOI: 10.1109/TCI.2024.3507634
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: Many Light Field (LF) angular super-resolution methods have been proposed to cope with the trade-off between LF spatial and angular resolution. However, most existing methods cannot simultaneously explore local and non-local LF geometric information, which limits their performance. Moreover, since the quality degradation model of the reconstructed dense LF is usually neglected, most solutions fail to effectively suppress blurry edges and artifacts. To overcome these limitations, this paper proposes an LF angular super-resolution network based on a convolutional Transformer and deep deblurring. The proposed method mainly comprises a Global-Local coupled Convolutional Transformer Network (GLCTNet), a Deep Deblurring Network (DDNet), and a Texture-aware feature Fusion Network (TFNet). The GLCTNet fully captures long-range dependencies while strengthening the locality of each view. The DDNet constructs the quality degradation model of the reconstructed dense LF to suppress the introduced blurred edges and artifacts. The TFNet distills texture features by extracting a local binary pattern map and a gradient map, and allows sufficient interaction among the obtained non-local geometric information, local structural information, and texture information for LF angular super-resolution. Comprehensive experiments demonstrate the superiority of the proposed method in various LF angular super-resolution tasks. A depth estimation application further verifies its effectiveness in generating high-quality dense LF.
Citations: 0
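The local binary pattern (LBP) map mentioned as one of TFNet's texture inputs is a classical descriptor: each pixel is encoded by comparing it with its 8 neighbours. A minimal implementation is sketched below; the bit ordering is an arbitrary convention, and this is a generic LBP rather than the paper's exact feature extractor.

```python
import numpy as np

def lbp_map(img):
    """8-neighbour local binary pattern of a 2-D grayscale image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre pixel.
    """
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
codes = lbp_map(img)   # one interior pixel, centre value 5
```

Because the code depends only on sign comparisons, it is invariant to monotonic intensity changes, which is what makes it a robust texture cue to fuse with gradient information.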
Low-Rank Angular Prior Guided Multi-Diffusion Model for Few-Shot Low-Dose CT Reconstruction
Wenhao Zhang; Bin Huang; Shuyue Chen; Xiaoling Xu; Weiwen Wu; Qiegen Liu
IEEE Transactions on Computational Imaging, vol. 10, pp. 1763-1774. Published 2024-12-04. DOI: 10.1109/TCI.2024.3503366
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: Low-dose computed tomography (LDCT) is essential in clinical settings to minimize radiation exposure; however, reducing the dose often leads to a significant decline in image quality. Additionally, conventional deep learning approaches typically require large datasets, raising concerns about privacy, cost, and time constraints. To address these challenges, a few-shot low-dose CT reconstruction method is proposed, utilizing a low-Rank Angular Prior (RAP) multi-diffusion model. In the prior-learning phase, projection data is transformed into multiple consecutive views organized by angular segmentation, allowing rich prior information to be extracted through low-rank processing. This structured approach enhances the learning capacity of the multi-diffusion model. During the iterative reconstruction phase, a stochastic differential equation solver is employed alongside data-consistency constraints to iteratively refine the acquired projection data. Furthermore, penalized weighted least-squares and total variation techniques are integrated to improve image quality. Results demonstrate that the reconstructed images closely resemble those obtained from normal-dose CT, validating the RAP model as an effective and practical solution for artifact and noise reduction while preserving image fidelity in low-dose situations.
Citations: 0
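The penalized weighted least-squares plus total variation refinement mentioned at the end of the abstract can be sketched in its simplest form: a weighted data-fidelity term (weights standing in for photon statistics) plus a smoothed TV penalty, minimized by plain gradient descent. The toy phantom, the denoising (rather than full tomographic) forward model, and all step sizes are assumptions; the paper couples such a penalty with a learned multi-diffusion prior.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy piecewise-constant image with noise whose reliability varies per
# pixel, a stand-in for low-dose measurement statistics.
x_true = np.zeros((32, 32)); x_true[8:24, 8:24] = 1.0
w = rng.uniform(0.5, 1.5, size=x_true.shape)   # statistical weights
y = x_true + rng.normal(0, 0.2, size=x_true.shape)

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed isotropic total-variation penalty."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

# Penalized weighted least squares: minimize 0.5*||x - y||_w^2 + lam*TV(x)
x = y.copy()
step, lam = 0.2, 0.15
for _ in range(100):
    x -= step * (w * (x - y) + lam * tv_grad(x))

err_noisy = np.linalg.norm(y - x_true)
err_rec = np.linalg.norm(x - x_true)
```

Even this crude solver suppresses noise in the flat regions while the TV term preserves the block's edges, which is the role the PWLS and TV terms play inside the full reconstruction loop.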
MVMS-RCN: A Dual-Domain Unified CT Reconstruction With Multi-Sparse-View and Multi-Scale Refinement-Correction
Xiaohong Fan; Ke Chen; Huaming Yi; Yin Yang; Jianping Zhang
IEEE Transactions on Computational Imaging, vol. 10, pp. 1749-1762. Published 2024-11-27. DOI: 10.1109/TCI.2024.3507645
IF 4.2 | CAS Zone 2 (Computer Science)
Abstract: X-ray Computed Tomography (CT) is one of the most important diagnostic imaging techniques in clinical applications. Sparse-view CT imaging reduces the number of projection views to lower the radiation dose and alleviate the potential risk of radiation exposure. Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods: 1) do not fully use the projection data; 2) do not always link their architecture designs to a mathematical theory; 3) do not flexibly handle multi-sparse-view reconstruction assignments. This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view CT reconstruction. We propose a novel dual-domain unified framework that offers a great deal of flexibility for multi-sparse-view CT reconstruction through a single model. This framework combines the theoretical advantages of model-based methods with the superior reconstruction performance of DL-based methods, resulting in the expected generalizability of DL. We propose a refinement module that applies unfolding in the projection domain to refine full-sparse-view projection errors, as well as an image-domain correction module that distills multi-scale geometric error corrections to reconstruct sparse-view CT. This provides a new way to explore the potential of projection information and a new perspective on designing network architectures. The multi-scale geometric correction module is end-to-end learnable, and our method can function as a plug-and-play reconstruction technique, adaptable to various applications. Extensive experiments demonstrate that our framework is superior to other existing state-of-the-art methods.
Citations: 0