Latest Articles in IEEE Transactions on Computational Imaging

Transfer Learning for Data Fusion for Electromagnetic and Ultrasound Breast Imaging
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-28 · DOI: 10.1109/TCI.2025.3541934
Valentin Noël; Thomas Rodet; Dominique Lesselier
Abstract: Aiming at improved breast imaging, this contribution explores several scenarios for segmenting and estimating the distribution of electromagnetic (EM) and/or ultrasonic (US) parameters within breast tissue. A two-fold approach is adopted, leveraging Transfer Learning (TL) through Bayesian Neural Networks (BNN): the first objective is to consistently enhance imaging results, and the second is to establish a novel framework for data-fusion transfer learning. The methodology is tailored to Artificial, Convolutional, and Bayesian Neural Networks, and its effectiveness is demonstrated on EM and US datasets computed in reliable scenarios, with a focus on heterogeneously dense and extremely dense breasts. Furthermore, a novel transfer-learning Bayesian data-fusion framework incorporating multi-frequency data exploits the complementary nature of low-resolution EM and high-resolution US imaging. By enhancing the fusion of EM and US data, this framework yields better-contrasted zones in the images and is shown to outperform the most common transfer learning approaches.
Vol. 11, pp. 546-555 · Citations: 0
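As a rough illustration of the transfer-learning data-fusion pattern the abstract describes, here is a minimal PyTorch sketch: a pretrained single-modality encoder is frozen while a second encoder and a fusion head are fine-tuned on paired EM/US data. The network sizes, the freezing strategy, and all tensor shapes are illustrative assumptions, not the paper's BNN architecture.

```python
# Illustrative sketch only: generic transfer-learning data fusion in PyTorch,
# NOT the authors' Bayesian architecture. Shapes and layers are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder for one modality (EM or US)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class FusionHead(nn.Module):
    """Fuses EM and US feature maps into one parameter map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(64, 1, 1)  # 32 EM + 32 US channels -> 1 map
    def forward(self, f_em, f_us):
        return self.net(torch.cat([f_em, f_us], dim=1))

em_enc, us_enc, head = Encoder(), Encoder(), FusionHead()

# Pretraining of em_enc on single-modality data is assumed done; transfer it
# by freezing its weights and fine-tuning the US encoder plus fusion head.
for p in em_enc.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(list(us_enc.parameters()) + list(head.parameters()), lr=1e-3)
em, us = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # dummy batch
target = torch.randn(4, 1, 64, 64)                             # dummy parameter map

opt.zero_grad()
pred = head(em_enc(em), us_enc(us))
loss = nn.functional.mse_loss(pred, target)
loss.backward()
opt.step()
```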
Towards Robust and Generalizable Lensless Imaging With Modular Learned Reconstruction
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-28 · DOI: 10.1109/TCI.2025.3539448
Eric Bezzam; Yohann Perron; Martin Vetterli
Abstract: Lensless cameras discard the conventional design principle that imaging should mimic the human eye: the lens is replaced with a thin mask, and image formation is moved to digital post-processing. State-of-the-art lensless imaging techniques use learned approaches that combine physical modeling and neural networks. However, these approaches make simplifying modeling assumptions for ease of calibration and computation. Moreover, the generalizability of learned approaches to lensless measurements from new masks has not been studied. To this end, we utilize a modular learned reconstruction in which a key component is a pre-processor prior to image recovery. We theoretically demonstrate the pre-processor's necessity for standard image recovery techniques (Wiener filtering and iterative algorithms), and through extensive experiments show its effectiveness for multiple lensless imaging approaches and across datasets of different mask types (amplitude and phase). We also perform the first generalization benchmark across mask types to evaluate how well reconstructions trained with one system generalize to others. Our modular reconstruction enables us to use pre-trained components and transfer learning on new systems, cutting down weeks of tedious measurements and training. As part of our work, we open-source four datasets, as well as software for measuring datasets and for training our modular reconstruction.
Vol. 11, pp. 213-227 · Citations: 0
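The abstract singles out Wiener filtering as a standard recovery step whose behavior motivates the learned pre-processor. Below is a textbook Wiener-deconvolution sketch in NumPy under the usual convolutional lensless forward model; the random PSF and noise level are placeholders, and none of this reproduces the paper's modular pipeline.

```python
# Minimal Wiener deconvolution under a convolutional lensless model:
# measurement = scene * psf + noise (circular convolution assumed here).
import numpy as np

def wiener_deconvolve(measurement, psf, snr=100.0):
    """Classical Wiener filter: H* / (|H|^2 + 1/SNR) applied in frequency."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * Y))

rng = np.random.default_rng(0)
psf = rng.random((64, 64))            # stand-in for a calibrated mask PSF
psf /= psf.sum()
scene = np.zeros((64, 64)); scene[24:40, 24:40] = 1.0
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                            np.fft.fft2(np.fft.ifftshift(psf))))
meas += 0.01 * rng.standard_normal(meas.shape)  # sensor noise
recovered = wiener_deconvolve(meas, psf)
```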
Towards Single-Lens Controllable Depth-of-Field Imaging via Depth-Aware Point Spread Functions
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-28 · DOI: 10.1109/TCI.2025.3544019
Xiaolong Qian; Qi Jiang; Yao Gao; Shaohua Gao; Zhonghua Yi; Lei Sun; Kai Wei; Haifeng Li; Kailun Yang; Kaiwei Wang; Jian Bai
Abstract: Controllable Depth-of-Field (DoF) imaging commonly produces striking visual effects with heavy, expensive high-end lenses. However, confronted with the increasing demand from mobile scenarios, it is desirable to achieve a lightweight solution with a Minimalist Optical System (MOS). This work addresses two major limitations of MOS, namely severe optical aberrations and uncontrollable DoF, to achieve single-lens controllable DoF imaging via computational methods. A Depth-aware Controllable DoF Imaging (DCDI) framework is proposed, equipped with All-in-Focus (AiF) aberration correction and monocular depth estimation, where the recovered image and corresponding depth map are used to render imaging results under the diverse DoFs of any high-end lens via patch-wise convolution. To address depth-varying optical degradation, we introduce a Depth-aware Degradation-adaptive Training (DA²T) scheme. At the dataset level, a Depth-aware Aberration MOS (DAMOS) dataset is established based on simulated Point Spread Functions (PSFs) under different object distances. Additionally, we design two plug-and-play depth-aware mechanisms that embed depth information into aberration-image recovery to better tackle depth-aware degradation. Furthermore, we propose a storage-efficient Omni-Lens-Field model to represent the 4D PSF library of various lenses. With the predicted depth map, the recovered image, and the depth-aware PSF map inferred by Omni-Lens-Field, single-lens controllable DoF imaging is achieved. To the best of our knowledge, we are the first to explore a single-lens controllable DoF imaging solution. Comprehensive experimental results demonstrate that the proposed framework enhances recovery performance and attains impressive single-lens controllable DoF imaging results, providing a seminal baseline for this field.
Vol. 11, pp. 305-320 · Citations: 0
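To make the patch-wise DoF rendering step concrete, here is a minimal sketch that blurs each depth bin of an all-in-focus image by an amount growing with its distance from the focal plane. Gaussian kernels and the binning scheme are stand-ins for the paper's lens-specific depth-aware PSFs.

```python
# Depth-binned defocus rendering: apply a stronger blur to pixels farther
# from the chosen focus depth. Gaussians stand in for real 4D PSFs.
import numpy as np
from scipy.ndimage import gaussian_filter

def render_dof(aif_image, depth, focus_depth, n_bins=8, blur_scale=3.0):
    """Composite per-bin blurred copies of the AiF image by depth."""
    out = np.zeros_like(aif_image)
    edges = np.linspace(depth.min(), depth.max(), n_bins + 1)
    for i in range(n_bins):
        mask = (depth >= edges[i]) & (depth <= edges[i + 1])
        center = 0.5 * (edges[i] + edges[i + 1])
        sigma = blur_scale * abs(center - focus_depth)  # defocus grows with distance
        blurred = gaussian_filter(aif_image, sigma) if sigma > 1e-3 else aif_image
        out[mask] = blurred[mask]
    return out

rng = np.random.default_rng(0)
img = rng.random((128, 128))                            # stand-in AiF image
depth = np.tile(np.linspace(0.5, 3.0, 128), (128, 1))   # synthetic depth ramp
shallow_dof = render_dof(img, depth, focus_depth=1.0)
```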
NLCMR: Indoor Depth Recovery Model With Non-Local Cross-Modality Prior
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-27 · DOI: 10.1109/TCI.2025.3545358
Junkang Zhang; Zhengkai Qi; Faming Fang; Tingting Wang; Guixu Zhang
Abstract: Recovering a dense depth image from sparse inputs is inherently challenging. Image-guided depth completion has become a prevalent technique, leveraging sparse depth data alongside RGB images to produce detailed depth maps. Although deep learning-based methods have achieved notable success, many state-of-the-art networks operate as black boxes, lacking transparent mechanisms for depth recovery. To address this, we introduce a novel model-guided depth recovery method. Our approach is built on a maximum a posteriori (MAP) framework and features an optimization model that incorporates a non-local cross-modality regularizer and a deep image prior. The cross-modality regularizer capitalizes on the inherent correlations between depth and RGB images, enhancing the extraction of shared information, while the deep image prior effectively captures local characteristics across the depth and RGB domains. To counter the degenerate operators caused by high heterogeneity, we integrate an implicit data-consistency term into the model. The model is then realized as a network using the half-quadratic splitting algorithm. Extensive evaluations on the NYU-Depth V2 and SUN RGB-D datasets demonstrate that our method performs competitively with current deep learning techniques.
Vol. 11, pp. 265-276 · Citations: 0
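Since the model is realized as a network via half-quadratic splitting (HQS), a bare-bones HQS loop may help fix ideas: alternate a closed-form data-fit step with a proximal (denoising) step. The diagonal-mask forward operator and the Gaussian smoother below are illustrative stand-ins for the paper's cross-modality regularizer and deep prior.

```python
# Schematic HQS loop for min_x ||A x - y||^2 + lam * R(x), with the sparse
# depth operator A taken as a diagonal sampling mask (a simplification).
import numpy as np
from scipy.ndimage import gaussian_filter

def hqs_depth_completion(sparse_depth, mask, n_iter=50, mu=0.5):
    """x-step: pointwise closed-form data fit; z-step: prior proximal step,
    here replaced by a simple Gaussian smoother."""
    x = sparse_depth.copy()
    z = x.copy()
    for _ in range(n_iter):
        # x-step: minimize mask*(x - y)^2 + mu*(x - z)^2, solved pointwise
        x = (mask * sparse_depth + mu * z) / (mask + mu)
        # z-step: denoising as a stand-in for the learned prior's proximal map
        z = gaussian_filter(x, sigma=1.0)
    return x

rng = np.random.default_rng(0)
dense = np.tile(np.linspace(1.0, 4.0, 64), (64, 1))   # ground-truth depth
mask = (rng.random((64, 64)) < 0.05).astype(float)    # 5% observed samples
completed = hqs_depth_completion(dense * mask, mask)
```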
Exploring Quasi-Global Solutions to Compound Lens Based Computational Imaging Systems
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-25 · DOI: 10.1109/TCI.2025.3545357
Yao Gao; Qi Jiang; Shaohua Gao; Lei Sun; Kailun Yang; Kaiwei Wang
Abstract: Recently, joint design approaches that simultaneously optimize optical systems and downstream algorithms through data-driven learning have demonstrated superior performance over traditional separate design approaches. However, current joint design approaches rely heavily on the manual identification of initial lenses, posing challenges and limitations, particularly for compound lens systems with multiple potential starting points. In this work, we present Quasi-Global Search Optics (QGSO), which automatically designs compound lens based computational imaging systems in two parts: (i) the Fused Optimization Method for Automatic Optical Design (OptiFusion), which searches for diverse initial optical systems under given design specifications; and (ii) Efficient Physics-aware Joint Optimization (EPJO), which conducts parallel joint optimization of the initial optical systems and image reconstruction networks under physical constraints, culminating in the selection of the optimal solution among all search results. Extensive experimental results illustrate that QGSO serves as a transformative end-to-end lens design paradigm with superior global search ability, automatically providing compound lens based computational imaging systems with higher imaging quality than existing paradigms.
Vol. 11, pp. 333-348 · Citations: 0
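The two-part strategy (search diverse starting points, then optimize each and keep the best) follows the classic multi-start pattern. A toy sketch on a multi-modal 2D test function, standing in for a real lens merit function plus reconstruction loss:

```python
# Toy multi-start search: many random initial designs, local optimization of
# each, global winner selected. The 2D function is a placeholder objective.
import numpy as np
from scipy.optimize import minimize

def merit(params):
    """Multi-modal toy objective with many local minima."""
    x, y = params
    return (x**2 + y**2) / 20.0 + np.sin(3 * x) * np.cos(3 * y)

rng = np.random.default_rng(0)
starts = rng.uniform(-5, 5, size=(32, 2))   # stage (i): diverse initial points
results = [minimize(merit, s, method="Nelder-Mead") for s in starts]  # stage (ii)
best = min(results, key=lambda r: r.fun)    # keep the best solution found
print(best.x, best.fun)
```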
SLBL-PU: Shadow-Based Layer-By-Layer Phase Unwrapping for Efficient 3D Measurement
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-24 · DOI: 10.1109/TCI.2025.3544084
Ruiming Yu; Hongshan Yu; Haiqiang Xu; Wei Sun; Naveed Akhtar; Yaonan Wang
Abstract: Phase-shifting (PS) based structured light technology shows excellent 3D perception performance. However, it requires projecting an extensive array of patterns, imposing constraints on the measurement space, or embedding additional signals for phase unwrapping (PU), leading to motion artifacts and low robustness. To surmount these challenges, we propose a shadow-based, layer-by-layer phase unwrapping (SLBL-PU) method that enables absolute phase recovery for deep objects without any supplementary patterns. In the first stage, attention is focused on a novel truncation feature within the local phase, facilitating iterative PU to derive the modulated phase. Inspired by shading theory, the second stage restores the absolute phase from the geometric relationship between the imaging system and the object's shadows. Additionally, by incorporating a time-division multiplexing strategy, the efficiency of 3D reconstruction in dynamic scenes is further tripled. In experiments involving different depths, phase modulation, complex colored scenes, and dynamic scenes, the proposed method demonstrated superior performance. Specifically, in static environments (0 mm/s), the proposed approach yields higher measurement accuracy (0.020 mm and 0.195 mm) than the traditional spatial-domain modulation (PS) method. In dynamic environments (15 mm/s), the proposed approach theoretically requires as few as three patterns, with a defect rate lower than that of the nine-pattern, three-frequency PS method (8.58% vs. 14.68%).
Vol. 11, pp. 452-467 · Citations: 0
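Any PS-based pipeline, including this one, starts from the wrapped phase of N phase-shifted fringe images. The standard N-step formula is sketched below on synthetic fringes; the paper's shadow-based unwrapping itself is not reproduced.

```python
# Standard N-step phase-shifting wrapped-phase computation on synthetic
# fringes I_n = a + b*cos(phi + 2*pi*n/N); the PU stage is out of scope here.
import numpy as np

def wrapped_phase(frames):
    """frames: list of N fringe images with phase shifts 2*pi*n/N."""
    N = len(frames)
    shifts = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(frames, shifts))
    den = sum(I * np.cos(d) for I, d in zip(frames, shifts))
    return -np.arctan2(num, den)   # wrapped to (-pi, pi]

# Synthetic 3-step example with a linear carrier phase
x = np.linspace(0, 8 * np.pi, 256)
phi_true = np.tile(x, (256, 1))
frames = [0.5 + 0.5 * np.cos(phi_true + 2 * np.pi * n / 3) for n in range(3)]
phi_wrapped = wrapped_phase(frames)
```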
Enhanced Single Pixel Imaging by Using Adaptive Jointly Optimized Conditional Diffusion
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-20 · DOI: 10.1109/TCI.2025.3544087
Jiawei Dong; Hong Zeng; Sen Dong; Weining Chen; Qianxi Li; Jianzhong Cao; Qiurong Yan; Hao Wang
Abstract: Single-pixel imaging can reconstruct the original image at a low measurement rate (MR), and targets can be measured and reconstructed in low-light environments by capturing light-intensity information with a single-photon detector. Optimizing reconstruction quality at low MR has become a focal point of research aimed at improving measurement efficiency. The application of neural networks has significantly improved reconstruction quality, but performance still requires further enhancement. In this paper, a Diffusion Single Pixel Imaging Model (DSPIM) is proposed. A conditional diffusion model is used in the training and reconstruction processes of single-pixel imaging and is jointly optimized with an autoencoder network. This approach simulates the measurement and preliminary reconstruction of images, which are incorporated into the diffusion process as conditions. Noises and features are learned through a designed loss function consisting of a predicted-noise loss and a measurement-accuracy loss, allowing reconstruction to perform well at very low MR. In addition, an adaptive regularization-coefficient adjustment method (ARCA) is designed for more effective optimization. Finally, the learned weights are loaded into a single-photon counting system as the measurement matrix, demonstrating that the blurriness caused by insufficient features at low MR is effectively addressed: targets are clearer and features are well distinguished.
Vol. 11, pp. 289-304 · Citations: 0
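To illustrate the two-term loss the abstract describes (a predicted-noise loss plus a measurement-accuracy loss), here is a schematic single-timestep sketch in PyTorch. The tiny noise-prediction network, the random measurement matrix, the back-projected conditioning image, and the fixed 0.1 weighting (in place of ARCA) are all assumptions for illustration.

```python
# Schematic conditional-diffusion training step for single-pixel imaging:
# predict the injected noise, then also penalize measurement mismatch.
import torch
import torch.nn as nn

class TinyEps(nn.Module):
    """Toy noise predictor conditioned on a preliminary reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x_t, cond):
        return self.net(torch.cat([x_t, cond], dim=1))

model = TinyEps()
Phi = torch.randn(64, 32 * 32) / 32          # random measurement matrix (MR ~ 6%)
x0 = torch.rand(1, 1, 32, 32)                # clean image
y = Phi @ x0.view(-1)                        # single-pixel measurements
cond = (Phi.T @ y).view(1, 1, 32, 32)        # preliminary (back-projected) recon

eps = torch.randn_like(x0)
alpha_bar = torch.tensor(0.5)                # one fixed timestep for illustration
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps
eps_hat = model(x_t, cond)

# Recover an x0 estimate from the noise prediction, then enforce data fit
x0_hat = (x_t - (1 - alpha_bar).sqrt() * eps_hat) / alpha_bar.sqrt()
loss = nn.functional.mse_loss(eps_hat, eps) \
     + 0.1 * nn.functional.mse_loss(Phi @ x0_hat.view(-1), y)
loss.backward()
```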
RLNet: Reshaping Learning Network for Accurate Low-Light Light Field Image Enhancement
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-20 · DOI: 10.1109/TCI.2025.3544070
Xingzheng Wang; Jiahao Lin; Kaiqiang Chen; Wenhao Huang; Zixuan Wang; Yuanlong Deng
Abstract: Existing low-light light field (LF) image enhancement methods struggle to recover image details because they underutilize the 4D LF structure. We therefore propose a novel reshaping learning network, RLNet, to better exploit the 4D LF structure. The main idea is divide and conquer: combine complementary traits from multiple 2D representations reshaped from the LF data. Specifically, the network first learns local and global information from neighboring views and all views, respectively. Then, based on the extracted information, reshaping learning modules (RLMs) fully exploit the 4D LF structure by reshaping LF images into different 2D representations and simultaneously mining their corresponding traits. The network also integrates contrast attention modules (CAMs) to enhance image details. To validate the idea, we comprehensively evaluate RLNet on three publicly available datasets with different illumination. Experimental results demonstrate the superior performance of our method, which achieves an average PSNR gain of 0.45 dB over current state-of-the-art methods.
Vol. 11, pp. 421-434 · Citations: 0
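The reshaping idea, viewing the same 4D light field through different 2D layouts, can be shown in a few lines of NumPy: the same tensor becomes a sub-aperture mosaic or a macro-pixel image depending on the axis order. The dimensions below are arbitrary placeholders.

```python
# Reshaping a 4D light field L(u, v, s, t) into two complementary 2D layouts
# (a sub-aperture mosaic and a macro-pixel image) that ordinary 2D
# convolutions can process.
import numpy as np

U, V, S, T = 5, 5, 64, 64                 # angular (u,v) and spatial (s,t) sizes
lf = np.random.rand(U, V, S, T)           # dummy 4D light field

# Sub-aperture mosaic: tile the U*V views side by side -> (U*S, V*T)
subaperture = lf.transpose(0, 2, 1, 3).reshape(U * S, V * T)

# Macro-pixel layout: group all U*V angular samples of each spatial pixel
# together -> (S*U, T*V), as in lenslet-based capture
macropixel = lf.transpose(2, 0, 3, 1).reshape(S * U, T * V)
```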
Underwater Image Enhancement and Attenuation Restoration Based on Depth and Backscatter Estimation
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-20 · DOI: 10.1109/TCI.2025.3544065
Yi-Zeng Hsieh; Ming-Ching Chang
Abstract: Underwater image analysis technologies are important for studying in-water imagery in oceanography. Due to poor lighting conditions and severe scattering and attenuation of light, underwater image quality is heavily degraded in such environments, so underwater image enhancement has always been an essential step in the analysis pipeline. We develop an Underwater Image Enhancement and Attenuation Restoration (UIEAR) algorithm that works from an RGB image input based on 3D depth and backscatter estimation. The proposed method achieves superior performance with light computational requirements, making it easy to deploy on edge devices. We provide the following contributions: (1) Our enhancement is based on depth estimation using a new smoothing operator on RGB pixels, which provides 3D spatial information for improved backscatter estimation and attenuation restoration. (2) We develop an improved imaging model that considers parameters relative to the camera and the local light source to estimate attenuation and backscatter effects; the light-source estimate is constructed from a local neighborhood of pixels to avoid distorting the backscatter and attenuation estimation. (3) We adopt white-balance adjustment to enhance underwater pixels and better match real-world colors. Our method improves general underwater image analysis, including object detection and segmentation. Experimental results demonstrate the effectiveness of our algorithm in restoring and enhancing underwater images.
Vol. 11, pp. 321-332 · Citations: 0
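Backscatter/attenuation restoration of this kind rests on the standard underwater image-formation model I = J·exp(-βz) + B·(1 - exp(-βz)). A minimal inversion is sketched below; the depth map, attenuation coefficients β, and veiling light B are placeholder values, not the paper's estimated quantities.

```python
# Invert the standard underwater model per color channel: remove the
# backscatter term, then compensate attenuation along the depth z.
import numpy as np

def restore(I, z, beta, B):
    """I: HxWx3 observed image; z: HxW depth; beta, B: per-channel params."""
    t = np.exp(-beta[None, None, :] * z[:, :, None])        # transmission map
    J = (I - B[None, None, :] * (1.0 - t)) / np.clip(t, 1e-3, None)
    return np.clip(J, 0.0, 1.0)

rng = np.random.default_rng(0)
I = rng.random((64, 64, 3))                  # observed RGB image (stand-in)
z = np.full((64, 64), 2.0)                   # assumed scene depth in meters
beta = np.array([0.8, 0.4, 0.2])             # red attenuates fastest underwater
B = np.array([0.1, 0.2, 0.3])                # bluish veiling light
J = restore(I, z, beta, B)
```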
Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction
IF 4.2 · CAS Zone 2, Computer Science
IEEE Transactions on Computational Imaging · Pub Date: 2025-02-19 · DOI: 10.1109/TCI.2025.3540706
Shijun Liang; Evan Bell; Qing Qu; Rongrong Wang; Saiprasad Ravishankar
Abstract: The ability of the deep image prior (DIP) to recover high-quality images from incomplete or corrupted measurements has made it popular for inverse problems in image restoration and medical imaging, including magnetic resonance imaging (MRI). However, conventional DIP suffers from severe overfitting and spectral bias. In this work, we first analyze how DIP recovers information from undersampled imaging measurements by studying the training dynamics of the underlying networks in the kernel regime for different architectures, shedding light on important properties underlying DIP-based recovery. Current research suggests that using a reference image as the network input can enhance DIP's reconstruction performance compared to random inputs; however, obtaining suitable reference images requires supervision and raises practical difficulties. To overcome this obstacle, we further introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input while eliminating the need for training data. Our method incorporates a novel denoiser regularization term that enables robust and stable joint estimation of both the network input and the reconstructed image. We demonstrate that our self-guided method surpasses both the original DIP and modern supervised methods in MR image reconstruction performance and outperforms previous DIP-based schemes for image inpainting.
Vol. 11, pp. 435-451 · Citations: 0
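For readers new to DIP, the abstract's starting point is the basic scheme sketched below: fit an untrained CNN, fed a fixed random input, to the observed pixels only, and stop early before noise is overfit. The paper's self-guided variant additionally optimizes the input and adds a denoiser regularizer; that extension is noted here but not implemented.

```python
# Bare-bones deep image prior (DIP) loop for inpainting. The self-guided
# variant would also make `z` a trainable parameter with a denoiser penalty.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
z = torch.randn(1, 8, 64, 64)                     # fixed random network input
target = torch.rand(1, 1, 64, 64)                 # corrupted observation
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()   # known-pixel mask

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                            # early stopping is essential
    opt.zero_grad()
    out = net(z)
    loss = ((out - target).pow(2) * mask).mean()   # fit observed pixels only
    loss.backward()
    opt.step()
restored = net(z).detach()                         # network output fills the holes
```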