Re-Boosting Self-Collaboration Parallel Prompt GAN for Unsupervised Image Restoration

Xin Lin, Yuyan Zhou, Jingtong Yue, Chao Ren, Kelvin C K Chan, Lu Qi, Ming-Hsuan Yang
{"title":"Re-Boosting Self-Collaboration Parallel Prompt GAN for Unsupervised Image Restoration.","authors":"Xin Lin, Yuyan Zhou, Jingtong Yue, Chao Ren, Kelvin C K Chan, Lu Qi, Ming-Hsuan Yang","doi":"10.1109/TPAMI.2025.3589606","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning methods have demonstrated state-of-the-art performance in image restoration, especially when trained on large-scale paired datasets. However, acquiring paired data in real-world scenarios poses a significant challenge. Unsupervised restoration approaches based on generative adversarial networks (GANs) offer a promising solution without requiring paired datasets. Yet, these GAN-based approaches struggle to surpass the performance of conventional unsupervised GAN-based frameworks without significantly modifying model structures or increasing the computational complexity. To address these issues, we propose a self-collaboration (SC) strategy for existing restoration models. This strategy utilizes information from the previous stage as feedback to guide subsequent stages, achieving significant performance improvement without increasing the framework's inference complexity. The SC strategy comprises a prompt learning (PL) module and a restorer ($Res$). It iteratively replaces the previous less powerful fixed restorer $\\overline{Res}$ in the PL module with a more powerful $Res$. The enhanced PL module generates better pseudo-degraded/clean image pairs, leading to a more powerful $Res$ for the next iteration. Our SC can significantly improve the $Res$ 's performance by over 1.5dB without adding extra parameters or computational complexity during inference. Meanwhile, existing self-ensemble (SE) and our SC strategies enhance the performance of pre-trained restorers from different perspectives. As SE increases computational complexity during inference, we propose a re-boosting module to the SC (Reb-SC) to improve the SC strategy further by incorporating SE into SC without increasing inference time. This approach further enhances the restorer's performance by approximately 0.3 dB. Additionally, we present a baseline framework that includes parallel generative adversarial branches with complementary \"self-synthesis\" and \"unpaired-synthesis\" constraints, ensuring the effectiveness of the training framework. Extensive experimental results on restoration tasks demonstrate that the proposed model performs favorably against existing state-of-the-art unsupervised restoration methods. Source code and trained models are publicly available at: https://github.com/linxin0/RSCP2GAN.</p>","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TPAMI.2025.3589606","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning methods have demonstrated state-of-the-art performance in image restoration, especially when trained on large-scale paired datasets. However, acquiring paired data in real-world scenarios poses a significant challenge. Unsupervised restoration approaches based on generative adversarial networks (GANs) offer a promising solution without requiring paired datasets. Yet, these GAN-based approaches struggle to surpass the performance of conventional unsupervised GAN-based frameworks without significantly modifying model structures or increasing computational complexity. To address these issues, we propose a self-collaboration (SC) strategy for existing restoration models. This strategy uses information from the previous stage as feedback to guide subsequent stages, achieving significant performance improvements without increasing the framework's inference complexity. The SC strategy comprises a prompt learning (PL) module and a restorer ($Res$). It iteratively replaces the previous, less powerful fixed restorer $\overline{Res}$ in the PL module with a more powerful $Res$. The enhanced PL module generates better pseudo-degraded/clean image pairs, leading to a more powerful $Res$ for the next iteration. Our SC strategy can significantly improve the performance of $Res$, by over 1.5 dB, without adding extra parameters or computational complexity during inference. Meanwhile, the existing self-ensemble (SE) strategy and our SC strategy enhance the performance of pre-trained restorers from different perspectives. Because SE increases computational complexity during inference, we propose a re-boosting module for SC (Reb-SC) that further improves the SC strategy by incorporating SE into SC without increasing inference time. This approach further improves the restorer's performance by approximately 0.3 dB. Additionally, we present a baseline framework that includes parallel generative adversarial branches with complementary "self-synthesis" and "unpaired-synthesis" constraints, ensuring the effectiveness of the training framework. Extensive experimental results on restoration tasks demonstrate that the proposed model performs favorably against existing state-of-the-art unsupervised restoration methods. Source code and trained models are publicly available at: https://github.com/linxin0/RSCP2GAN.
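The abstract describes the SC strategy only at a high level. The sketch below shows one way such an iterative re-boosting loop could be organized in PyTorch; it is not the authors' released implementation (see the linked repository for that), and the helpers `synthesize_pseudo_pairs` and `train_restorer` are hypothetical placeholders for the PL module's pseudo-pair synthesis and the GAN training stage.

```python
# Minimal sketch (assumed structure, not the authors' code) of the
# self-collaboration (SC) loop: the fixed restorer inside the prompt-learning
# (PL) module is repeatedly replaced with the latest, stronger restorer, which
# yields better pseudo degraded/clean pairs and, in turn, a stronger restorer.
import copy

from torch import nn


def self_collaboration(
    restorer: nn.Module,            # the restorer Res being re-boosted
    pl_module: nn.Module,           # prompt-learning module
    unpaired_clean,                 # unpaired clean images
    unpaired_degraded,              # unpaired degraded images
    synthesize_pseudo_pairs,        # hypothetical: (pl, fixed_res, clean, degraded) -> pairs
    train_restorer,                 # hypothetical: (restorer, pairs) -> trained restorer
    num_rounds: int = 3,
) -> nn.Module:
    """Iteratively re-boost the restorer without changing its inference cost."""
    for _ in range(num_rounds):
        # Freeze a copy of the current restorer; it plays the role of the
        # fixed restorer (Res-bar) inside the PL module.
        fixed_res = copy.deepcopy(restorer).eval()
        for p in fixed_res.parameters():
            p.requires_grad_(False)

        # A stronger fixed restorer lets the PL module produce better
        # pseudo degraded/clean pairs from the unpaired data.
        pseudo_pairs = synthesize_pseudo_pairs(
            pl_module, fixed_res, unpaired_clean, unpaired_degraded
        )

        # Re-train the restorer on the improved pseudo pairs, producing a
        # stronger Res for the next round.
        restorer = train_restorer(restorer, pseudo_pairs)

    return restorer
```

Because the extra passes happen only during training, the restorer deployed at inference time is a single network with unchanged parameter count and cost, which is consistent with the abstract's claim of no added inference complexity.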
