{"title":"Learning physical-aware diffusion priors for zero-shot restoration of scattering-affected images","authors":"Yuanjian Qiao , Mingwen Shao , Lingzhuang Meng , Wangmeng Zuo","doi":"10.1016/j.patcog.2025.111473","DOIUrl":null,"url":null,"abstract":"<div><div>Zero-shot image restoration methods using pre-trained diffusion models have recently achieved remarkable success, which tackle image degradation without requiring paired data. However, these methods struggle to handle real-world images with intricate nonlinear scattering degradations due to the lack of physical knowledge. To address this challenge, we propose a novel Physical-aware Diffusion model (PhyDiff) for zero-shot restoration of scattering-affected images, which involves two crucial physical guidance strategies: Transmission-guided Conditional Generation (TCG) and Prior-aware Sampling Regularization (PSR). Specifically, the TCG exploits the transmission map that reflects the degradation density to dynamically guide the restoration of different corrupted regions during the reverse diffusion process. Simultaneously, the PSR leverages the inherent statistical properties of natural images to regularize the sampling output, thereby facilitating the quality of the recovered image. With these ingenious guidance schemes, our PhyDiff achieves high-quality restoration of multiple nonlinear degradations in a zero-shot manner. Extensive experiments on real-world degraded images demonstrate that our method outperforms existing methods both quantitatively and qualitatively.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"163 ","pages":"Article 111473"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325001335","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Zero-shot image restoration methods built on pre-trained diffusion models have recently achieved remarkable success, tackling image degradation without requiring paired training data. However, these methods struggle with real-world images corrupted by intricate nonlinear scattering because they lack physical knowledge of the degradation process. To address this challenge, we propose a novel Physical-aware Diffusion model (PhyDiff) for zero-shot restoration of scattering-affected images, which involves two crucial physical guidance strategies: Transmission-guided Conditional Generation (TCG) and Prior-aware Sampling Regularization (PSR). Specifically, TCG exploits a transmission map that reflects the degradation density to dynamically guide the restoration of differently corrupted regions during the reverse diffusion process. Simultaneously, PSR leverages the inherent statistical properties of natural images to regularize the sampling output, thereby improving the quality of the recovered image. With these complementary guidance schemes, PhyDiff achieves high-quality restoration across multiple nonlinear degradations in a zero-shot manner. Extensive experiments on real-world degraded images demonstrate that our method outperforms existing methods both quantitatively and qualitatively.
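The abstract does not give the exact formulation of TCG or PSR, but the mechanism it describes (a transmission map weighting how strongly each region is conditioned on the degraded observation, plus a natural-image-statistics correction applied to each sampling output) can be sketched. Below is a minimal, hypothetical PyTorch sketch of one reverse-diffusion loop under stated assumptions: `eps_model` is a pre-trained noise predictor with an assumed `(x, t)` signature, the transmission map comes from a dark-channel-style heuristic, and total variation stands in for the unspecified natural-image prior; `lam` and `eta` are illustrative guidance weights. None of these choices are taken from the paper itself.

```python
import torch
import torch.nn.functional as F

def estimate_transmission(y, omega=0.95, patch=15):
    """Dark-channel-style transmission estimate (hypothetical choice).
    Darker local minima suggest denser scattering, i.e. lower transmission."""
    dark = y.min(dim=1, keepdim=True).values                          # per-pixel RGB minimum
    dark = -F.max_pool2d(-dark, patch, stride=1, padding=patch // 2)  # local min-pooling
    return (1.0 - omega * dark).clamp(0.05, 1.0)

def tv_prior(x):
    """Total variation: a stand-in for the unspecified natural-image statistic."""
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def phydiff_style_sample(eps_model, y, alphas_cumprod, lam=1.0, eta=0.1):
    """DDIM-style reverse sampling with transmission-weighted conditioning
    (a TCG analogue) and a prior-gradient correction (a PSR analogue).
    y: degraded observation in [0, 1], shape (B, 3, H, W)."""
    t_map = estimate_transmission(y)   # high value => lightly degraded region
    x = torch.randn_like(y)            # start from pure Gaussian noise
    for t in reversed(range(len(alphas_cumprod))):
        a_bar = alphas_cumprod[t]
        with torch.no_grad():
            eps = eps_model(x, torch.full((y.shape[0],), t, device=y.device))
        # Predicted clean image from the current noisy sample.
        x0 = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        # TCG analogue: anchor lightly degraded regions to the observation.
        x0 = t_map * (lam * y + (1 - lam) * x0) + (1 - t_map) * x0
        # PSR analogue: nudge x0 down the gradient of the image prior.
        x0 = x0.detach().requires_grad_(True)
        grad, = torch.autograd.grad(tv_prior(x0), x0)
        x0 = (x0 - eta * grad).detach()
        # Deterministic DDIM step back to noise level t-1.
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_bar)
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x.clamp(0.0, 1.0)
```

The design idea mirrored here is that regions with high transmission (lightly degraded) are anchored to the observation while densely scattered regions rely on the diffusion prior, and the prior-gradient step keeps each intermediate estimate consistent with natural-image statistics.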
About the Journal
The field of pattern recognition is both mature and rapidly evolving, playing a crucial role in related disciplines such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas such as biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago in the early days of computer science, has since grown significantly in scope and influence.