Peigang Liu, Chenkang Wang, Yecong Wan, Penghui Lei
DOI: 10.1016/j.cag.2025.104167
Journal: Computers & Graphics, Volume 127, Article 104167 (Q2, Computer Science, Software Engineering)
Published: 2025-01-21
URL: https://www.sciencedirect.com/science/article/pii/S0097849325000068
Citations: 0
Abstract
Restoring high-quality clean images from corrupted observations, commonly referred to as image restoration, is a longstanding challenge in the computer vision community. Existing methods often struggle to recover fine-grained contextual details because they lack semantic awareness of the degraded images. To overcome this limitation, we propose a novel prompt-guided semantic-aware image restoration network, termed PSAIR, which adaptively incorporates and exploits semantic priors of degraded images to reconstruct photographically fine-grained details. Specifically, we exploit the robust degradation filtering and semantic perception capabilities of the Segment Anything Model (SAM), using it to provide non-destructive semantic priors that aid the network's semantic perception of the degraded images. To absorb this semantic prior, we propose a semantic fusion module that adaptively uses the segmentation map to modulate the features of the degraded image, thereby helping the network better perceive different semantic regions. Furthermore, because the segmentation map does not provide semantic categories, we propose a prompt-guided module that dynamically guides the restoration of different semantics via learnable visual prompts, enabling the network to customize restoration for each semantic region. Comprehensive experiments demonstrate that PSAIR restores finer contextual details and thus outperforms existing state-of-the-art methods by a large margin in both quantitative and qualitative evaluation.
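The two mechanisms the abstract describes can be sketched in PyTorch: a semantic fusion module that uses the segmentation map to predict a spatially varying affine modulation of the degraded-image features (in the spirit of SPADE-style conditioning), and a prompt-guided module that softly blends a bank of learnable visual prompts into the features. This is a minimal illustrative sketch; all module names, channel sizes, and architectural details here are assumptions, not the paper's exact design.

```python
# Illustrative sketch only: the real PSAIR architecture may differ in every
# detail (conditioning scheme, prompt injection, channel widths, etc.).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticFusion(nn.Module):
    """Modulate degraded-image features with a segmentation-map prior."""

    def __init__(self, feat_ch: int, seg_ch: int = 1, hidden: int = 32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(seg_ch, hidden, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.to_gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)  # scale map
        self.to_beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)   # shift map

    def forward(self, feats: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # Resize the segmentation map to the feature resolution.
        seg = F.interpolate(seg, size=feats.shape[-2:], mode="nearest")
        h = self.shared(seg)
        # Spatially varying affine modulation conditioned on semantics.
        return feats * (1 + self.to_gamma(h)) + self.to_beta(h)


class PromptGuided(nn.Module):
    """Blend learnable visual prompts according to the input features."""

    def __init__(self, feat_ch: int, num_prompts: int = 5, size: int = 16):
        super().__init__()
        # Bank of learnable prompt tensors; since the segmentation map carries
        # no category labels, the network learns which prompt(s) to apply.
        self.prompts = nn.Parameter(torch.randn(num_prompts, feat_ch, size, size))
        self.weight_head = nn.Linear(feat_ch, num_prompts)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Global pooling -> soft weights over the prompt bank.
        w = torch.softmax(self.weight_head(feats.mean(dim=(-2, -1))), dim=-1)
        prompt = torch.einsum("bn,nchw->bchw", w, self.prompts)  # (B, C, h, w)
        prompt = F.interpolate(
            prompt, size=feats.shape[-2:], mode="bilinear", align_corners=False
        )
        return feats + prompt  # inject semantic guidance additively
```

In such a design, the segmentation prior only modulates features rather than replacing them ("non-destructive"), and the prompt weights let the network specialize its restoration behavior per region without requiring explicit category labels.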
Journal description:
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.