DECTNet: A detail enhanced CNN-Transformer network for single-image deraining

Liping Wang, Guangwei Gao
{"title":"DECTNet: A detail enhanced CNN-Transformer network for single-image deraining","authors":"Liping Wang ,&nbsp;Guangwei Gao","doi":"10.1016/j.cogr.2024.12.002","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, Convolutional Neural Networks (CNN) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.</div></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"5 ","pages":"Pages 48-60"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Robotics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667241325000011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recently, Convolutional Neural Networks (CNNs) and Transformers have been widely adopted in image restoration tasks. While CNNs are highly effective at extracting local information, they struggle to capture global context. Conversely, Transformers excel at capturing global information but often face challenges in preserving spatial and structural details. To address these limitations and harness both global and local features for single-image deraining, we propose a novel approach called the Detail Enhanced CNN-Transformer Network (DECTNet). DECTNet integrates two key components: the Enhanced Residual Feature Distillation Block (ERFDB) and the Dual Attention Spatial Transformer Block (DASTB). In the ERFDB, we introduce a mixed attention mechanism, incorporating channel information-enhanced layers within the residual feature distillation structure. This design facilitates a more effective step-by-step extraction of detailed information, enabling the network to restore fine-grained image details progressively. Additionally, in the DASTB, we utilize spatial attention to refine features obtained from multi-head self-attention, while the feed-forward network leverages channel information to enhance detail preservation further. This complementary use of CNNs and Transformers allows DECTNet to balance global context understanding with detailed spatial restoration. Extensive experiments have demonstrated that DECTNet outperforms some state-of-the-art methods on single-image deraining tasks. Furthermore, our model achieves competitive results on three low-light datasets and a single-image desnowing dataset, highlighting its versatility and effectiveness across different image restoration challenges.
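The abstract describes the two blocks only at a high level. The PyTorch sketch below is one plausible reading of that description: an ERFDB-style block that interleaves 1x1 "distillation" convolutions with 3x3 refinement convolutions and gates the fused result with channel attention, followed by a DASTB-style block that refines multi-head self-attention output with a spatial gate and routes channel information through the feed-forward path. The block names follow the abstract, but all layer choices, channel widths, reduction ratios, and the exact placement of the attention gates are assumptions for illustration, not the published architecture.

# Minimal sketch of ERFDB/DASTB as described in the abstract.
# All hyperparameters and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (assumed form of the
    'channel information-enhanced layer')."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class ERFDB(nn.Module):
    """Enhanced Residual Feature Distillation Block (sketch): each step
    'distills' part of the features with a 1x1 conv while a 3x3 conv
    refines the rest; distilled branches are concatenated and fused."""
    def __init__(self, channels, distill=2):
        super().__init__()
        d = channels // distill
        self.distill1 = nn.Conv2d(channels, d, 1)
        self.refine1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.distill2 = nn.Conv2d(channels, d, 1)
        self.refine2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.distill3 = nn.Conv2d(channels, d, 1)
        self.fuse = nn.Conv2d(3 * d, channels, 1)
        self.ca = ChannelAttention(channels)
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x):
        d1 = self.distill1(x)
        r1 = self.act(self.refine1(x))
        d2 = self.distill2(r1)
        r2 = self.act(self.refine2(r1))
        d3 = self.distill3(r2)
        out = self.fuse(torch.cat([d1, d2, d3], dim=1))
        return self.ca(out) + x  # channel gate plus residual connection

class DASTB(nn.Module):
    """Dual Attention Spatial Transformer Block (sketch): multi-head
    self-attention over flattened pixels, a spatial gate refining its
    output, then a channel-gated feed-forward network."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.spatial_gate = nn.Sequential(  # spatial attention on MHSA output
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.norm2 = nn.LayerNorm(channels)
        self.ffn = nn.Sequential(
            nn.Conv2d(channels, 2 * channels, 1), nn.GELU(),
            nn.Conv2d(2 * channels, channels, 1))
        self.ca = ChannelAttention(channels)  # channel info in the FFN path

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)           # (B, HW, C)
        attn, _ = self.attn(*[self.norm1(seq)] * 3)  # self-attention
        attn = attn.transpose(1, 2).reshape(b, c, h, w)
        x = x + attn * self.spatial_gate(attn)       # spatially refined residual
        seq = self.norm2(x.flatten(2).transpose(1, 2))
        ffn_in = seq.transpose(1, 2).reshape(b, c, h, w)
        return x + self.ca(self.ffn(ffn_in))

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    print(DASTB(32)(ERFDB(32)(x)).shape)  # torch.Size([1, 32, 64, 64])

Running the script verifies that both blocks preserve the (B, C, H, W) feature shape, so stacks of ERFDB and DASTB units can be chained freely, consistent with the complementary CNN/Transformer design the abstract describes.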