Controlling vision-language model for enhancing image restoration

Mingwen Shao, Weihan Liu, Qiwang Li, Lingzhuang Meng, Yecong Wan

Image and Vision Computing, Volume 158, Article 105538 (published 2025-04-08). DOI: 10.1016/j.imavis.2025.105538
Restoring low-quality images to their original high-quality state remains a significant challenge due to inherent uncertainties, particularly in blind image restoration scenarios where the nature of the degradation is unknown. Despite recent advances, many restoration techniques still struggle with robustness and adaptability across diverse degradation conditions. In this paper, we introduce an approach that augments the restoration model with the robust prior features of CLIP, a large-scale vision-language model, to improve its handling of a broader spectrum of degradation tasks. We integrate the robust priors from CLIP into a pre-trained image restoration model via cross-attention mechanisms, and we design a Prior Adapter to modulate these features, thereby enhancing the model’s restoration performance. Additionally, we introduce a prompt learning framework that harnesses CLIP’s multimodal alignment capabilities to fine-tune pre-trained restoration models. Furthermore, we employ CLIP’s contrastive loss to ensure that restored images align more closely with clean-image prompts in CLIP’s latent space, thereby improving restoration quality. Through comprehensive experiments, we demonstrate the effectiveness and robustness of our method, showcasing its superior adaptability to a wide array of degradation tasks. Our findings underscore the potential of integrating vision-language models such as CLIP to advance the state of the art in image restoration.
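As a concrete illustration of the cross-attention prior injection described in the abstract, below is a minimal PyTorch sketch. The module names (`PriorAdapter`, `PriorCrossAttention`), dimensions, and layer choices are illustrative assumptions; the abstract does not specify the paper's exact architecture.

```python
# A minimal sketch: frozen CLIP image features act as a prior that is adapted
# and then injected into restoration features through cross-attention.
# All names and shapes here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class PriorAdapter(nn.Module):
    """Hypothetical adapter that modulates frozen CLIP features
    before they are fused into the restoration network."""
    def __init__(self, clip_dim: int = 512, feat_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, feat_dim),
            nn.GELU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, clip_feat: torch.Tensor) -> torch.Tensor:
        # clip_feat: (B, N, clip_dim) token features from a frozen CLIP encoder
        return self.proj(clip_feat)

class PriorCrossAttention(nn.Module):
    """Cross-attention block: restoration features query the adapted CLIP prior."""
    def __init__(self, feat_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # x:     (B, L, feat_dim) flattened restoration features
        # prior: (B, N, feat_dim) adapted CLIP prior tokens
        attended, _ = self.attn(query=x, key=prior, value=prior)
        return self.norm(x + attended)  # residual injection of the prior

# Usage with random tensors standing in for real features.
adapter = PriorAdapter()
fuse = PriorCrossAttention()
clip_tokens = torch.randn(2, 50, 512)    # e.g. ViT-B/32 token features
restor_feats = torch.randn(2, 1024, 256) # flattened feature map of the restorer
out = fuse(restor_feats, adapter(clip_tokens))
print(out.shape)  # torch.Size([2, 1024, 256])
```

The residual connection lets the restoration backbone fall back on its own features when the prior is uninformative, which is one common way such adapters are wired in.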
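The CLIP-space alignment objective can likewise be sketched as a contrastive loss over prompt embeddings. The prompt wording, the two-prompt (clean vs. degraded) setup, and the softmax-over-cosine-similarities form below are assumptions modeled on CLIP's standard training loss, not necessarily the paper's exact formulation.

```python
# A sketch of the CLIP-space alignment objective: the restored image embedding
# is pulled toward a "clean image" text prompt and pushed away from a
# "degraded image" prompt. The loss form is an assumption modeled on CLIP.
import torch
import torch.nn.functional as F

def clip_alignment_loss(image_emb: torch.Tensor,
                        clean_text_emb: torch.Tensor,
                        degraded_text_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """image_emb: (B, D) CLIP embedding of the restored image.
    clean_text_emb / degraded_text_emb: (D,) embeddings of prompts such as
    "a sharp, clean photo" vs. "a noisy, degraded photo" (hypothetical wording)."""
    img = F.normalize(image_emb, dim=-1)
    texts = F.normalize(torch.stack([clean_text_emb, degraded_text_emb]), dim=-1)
    logits = img @ texts.t() / temperature               # (B, 2) cosine similarities
    target = torch.zeros(img.size(0), dtype=torch.long)  # index 0 = clean prompt
    return F.cross_entropy(logits, target)               # pull toward the clean prompt

# Example with random stand-ins; in practice the embeddings would come from a
# frozen CLIP model (e.g. model.encode_image(...) / model.encode_text(...)).
loss = clip_alignment_loss(torch.randn(4, 512, requires_grad=True),
                           torch.randn(512), torch.randn(512))
loss.backward()
print(loss.item())
```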
Journal Introduction:
The primary aim of Image and Vision Computing is to provide an effective medium for exchanging the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding in the discipline by encouraging the quantitative comparison and performance evaluation of proposed methodologies. Coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.