Prompting Rain Off: Evolving Compact Dual Prompts for Continual De-Raining.

Minghao Liu, Wenhan Yang, Jiaying Liu
IEEE Transactions on Image Processing (IF 13.7) · DOI: 10.1109/TIP.2026.3689428 · Published 2026-05-07 · Citations: 0

Abstract

In recent years there has been notable progress in single-image rain removal, but most approaches assume a static data distribution. When the data distribution changes over time, catastrophic forgetting arises, a challenge that is common and critical in real-world scenarios. To address this, we propose Evolving COmpact Dual Prompt Learning (EcoDPL), an efficient rehearsal-free continual-learning de-raining framework designed specifically for low-level vision tasks. Specifically, we design two prompt pools, at the image level and at the feature level, and insert their prompts into the input images and the embedding tokens, enabling better knowledge transfer across tasks. Our adaptive weight-generation module, P-Fuser, attaches an attention map to each prompt so that the model attends differently to different inputs and derives input-specific weights for fusing the prompts, making the inserted prompts more flexible across varied inputs. We also introduce Grad-Tuner, a dictionary-learning strategy that compresses knowledge into fewer prompts; this makes the retained knowledge more compact and frees up space for new prompts to learn new tasks. Our method stands out by leveraging small, learnable prompts for efficient knowledge retention across tasks without increasing training time or parameter count. Furthermore, we present an augmented variant that upgrades the distance function γ from a simple cosine distance to a more expressive weight-generation network, and we employ a fine-tuned dictionary-learning technique that compresses knowledge into a more compact form and strengthens the ability of the prompts to learn new tasks. With these designs, the model adapts more flexibly to varied inputs and compresses knowledge into fewer prompts, freeing space to learn new tasks. Extensive experiments on multiple rain-removal datasets show that EcoDPL consistently outperforms previous continual-learning techniques. Notably, although EcoDPL is designed for continual learning on changing data, it also performs well on stationary data, demonstrating its robustness and versatility.
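The abstract describes the mechanism only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of the core idea, a pool of learnable prompts fused with input-conditioned weights produced by a small weight-generation network rather than a plain cosine distance, in the spirit of the P-Fuser module and the upgraded distance function γ. All class names, tensor shapes, and the query-pooling step are assumptions for illustration, not the authors' implementation; the image-level prompt pool and the Grad-Tuner dictionary-learning compression are omitted.

```python
# Minimal sketch (assumptions, not the authors' code): a feature-level prompt
# pool whose prompts are fused with input-adaptive weights before being
# prepended to the embedding tokens of a degraded image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptPool(nn.Module):
    """Pool of learnable prompt tokens fused by input-conditioned weights."""

    def __init__(self, pool_size: int = 8, prompt_len: int = 4, dim: int = 256):
        super().__init__()
        # pool_size prompts, each a short sequence of prompt_len tokens.
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        # Small weight-generation head standing in for the learned distance
        # function gamma (instead of scoring prompts by cosine similarity).
        self.weight_gen = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, pool_size)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) embedding tokens of the rainy input.
        query = tokens.mean(dim=1)                        # (B, dim) global descriptor
        weights = F.softmax(self.weight_gen(query), dim=-1)  # (B, pool_size)
        # Fuse the whole pool into one prompt per sample: (B, prompt_len, dim).
        fused = torch.einsum("bp,pld->bld", weights, self.prompts)
        # Prepend the fused prompt to the token sequence.
        return torch.cat([fused, tokens], dim=1)


if __name__ == "__main__":
    pool = PromptPool()
    x = torch.randn(2, 196, 256)   # dummy token sequence
    print(pool(x).shape)           # torch.Size([2, 200, 256])
```

Under this reading, only the small prompt and weight-generation parameters are updated per task, which is what keeps the approach rehearsal-free and avoids growing the backbone's training cost; the paper's actual fusion, insertion points, and compression steps may differ.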
