Distribution-decouple learning network: an innovative approach for single image dehazing with spatial and frequency decoupling

Yabo Wu, Wenting Li, Ziyang Chen, Hui Wen, Zhongwei Cui, Yongjun Zhang
{"title":"分布-解耦学习网络:空间和频率解耦的单幅图像去噪创新方法","authors":"Yabo Wu, Wenting Li, Ziyang Chen, Hui Wen, Zhongwei Cui, Yongjun Zhang","doi":"10.1007/s00371-024-03556-3","DOIUrl":null,"url":null,"abstract":"<p>Image dehazing methods face challenges in addressing the high coupling between haze and object feature distributions in the spatial and frequency domains. This coupling often results in oversharpening, color distortion, and blurring of details during the dehazing process. To address these issues, we introduce the distribution-decouple module (DDM) and dual-frequency attention mechanism (DFAM). The DDM works effectively in the spatial domain, decoupling haze and object features through a feature decoupler and then uses a two-stream modulator to further reduce the negative impact of haze on the distribution of object features. Simultaneously, the DFAM focuses on decoupling information in the frequency domain, separating high- and low-frequency information and applying attention to different frequency components for frequency calibration. Finally, we introduce a novel dehazing network, the distribution-decouple learning network for single image dehazing with spatial and frequency decoupling (DDLNet). This network integrates DDM and DFAM, effectively addressing the issue of coupled feature distributions in both spatial and frequency domains, thereby enhancing the clarity and fidelity of the dehazed images. Extensive experiments indicate the outperformance of our DDLNet when compared to the state-of-the-art (SOTA) methods, achieving a 1.50 dB increase in PSNR on the SOTS-indoor dataset. Concomitantly, it indicates a 1.26 dB boost on the SOTS-outdoor dataset. Additionally, our method performs significantly well on the nighttime dehazing dataset NHR, achieving a 0.91 dB improvement. Code and trained models are available at https://github.com/aoe-wyb/DDLNet.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distribution-decouple learning network: an innovative approach for single image dehazing with spatial and frequency decoupling\",\"authors\":\"Yabo Wu, Wenting Li, Ziyang Chen, Hui Wen, Zhongwei Cui, Yongjun Zhang\",\"doi\":\"10.1007/s00371-024-03556-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Image dehazing methods face challenges in addressing the high coupling between haze and object feature distributions in the spatial and frequency domains. This coupling often results in oversharpening, color distortion, and blurring of details during the dehazing process. To address these issues, we introduce the distribution-decouple module (DDM) and dual-frequency attention mechanism (DFAM). The DDM works effectively in the spatial domain, decoupling haze and object features through a feature decoupler and then uses a two-stream modulator to further reduce the negative impact of haze on the distribution of object features. Simultaneously, the DFAM focuses on decoupling information in the frequency domain, separating high- and low-frequency information and applying attention to different frequency components for frequency calibration. Finally, we introduce a novel dehazing network, the distribution-decouple learning network for single image dehazing with spatial and frequency decoupling (DDLNet). 
This network integrates DDM and DFAM, effectively addressing the issue of coupled feature distributions in both spatial and frequency domains, thereby enhancing the clarity and fidelity of the dehazed images. Extensive experiments indicate the outperformance of our DDLNet when compared to the state-of-the-art (SOTA) methods, achieving a 1.50 dB increase in PSNR on the SOTS-indoor dataset. Concomitantly, it indicates a 1.26 dB boost on the SOTS-outdoor dataset. Additionally, our method performs significantly well on the nighttime dehazing dataset NHR, achieving a 0.91 dB improvement. Code and trained models are available at https://github.com/aoe-wyb/DDLNet.</p>\",\"PeriodicalId\":501186,\"journal\":{\"name\":\"The Visual Computer\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Visual Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00371-024-03556-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03556-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Image dehazing methods face challenges in addressing the high coupling between haze and object feature distributions in the spatial and frequency domains. This coupling often results in oversharpening, color distortion, and blurring of details during the dehazing process. To address these issues, we introduce the distribution-decouple module (DDM) and the dual-frequency attention mechanism (DFAM). The DDM operates in the spatial domain, decoupling haze and object features through a feature decoupler and then using a two-stream modulator to further reduce the negative impact of haze on the distribution of object features. Simultaneously, the DFAM decouples information in the frequency domain, separating high- and low-frequency components and applying attention to each for frequency calibration. Finally, we introduce a novel dehazing network, the distribution-decouple learning network for single image dehazing with spatial and frequency decoupling (DDLNet). This network integrates DDM and DFAM, effectively addressing coupled feature distributions in both the spatial and frequency domains, thereby enhancing the clarity and fidelity of the dehazed images. Extensive experiments show that DDLNet outperforms state-of-the-art (SOTA) methods, achieving a 1.50 dB PSNR gain on the SOTS-indoor dataset and a 1.26 dB gain on the SOTS-outdoor dataset. Our method also performs well on the nighttime dehazing dataset NHR, with a 0.91 dB improvement. Code and trained models are available at https://github.com/aoe-wyb/DDLNet.
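To make the spatial-domain idea concrete, below is a minimal PyTorch sketch of a DDM-style block, reconstructed only from the abstract. It is not the authors' implementation: the mask-based feature decoupler, the concatenation-based two-stream modulator, and all names are assumptions (the released code at the GitHub link above is authoritative).

```python
# Hypothetical sketch of a DDM-style spatial decoupling block.
# Assumption: the decoupler predicts a soft per-pixel haze mask that splits
# features into haze/object streams; a two-stream modulator fuses them back.
import torch
import torch.nn as nn


class DistributionDecouple(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Feature decoupler: predicts a per-pixel haze probability in [0, 1].
        self.decoupler = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Two-stream modulator: re-weights the object stream with haze context.
        self.modulator = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.decoupler(x)           # soft haze mask, shape (B, 1, H, W)
        haze_feat = x * mask               # haze-dominated component
        object_feat = x * (1.0 - mask)     # object-dominated component
        # Fuse the two streams and keep a residual path to the input.
        return x + self.modulator(torch.cat([haze_feat, object_feat], dim=1))
```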
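Similarly, a minimal sketch of the DFAM-style frequency decoupling, under the assumption that the low-frequency band can be approximated by a blur (downsample then upsample) and the high-frequency band by its residual; the per-band squeeze-and-excitation attention is likewise an assumption, not necessarily the paper's design.

```python
# Hypothetical sketch of a DFAM-style dual-frequency attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualFrequencyAttention(nn.Module):
    """Split features into low/high-frequency bands and recalibrate each."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # One channel-attention branch per frequency band (an assumption).
        def make_attn() -> nn.Sequential:
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
        self.low_attn = make_attn()
        self.high_attn = make_attn()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-frequency band: a blurred (downsampled then upsampled) copy.
        low = F.interpolate(
            F.avg_pool2d(x, kernel_size=2),
            size=x.shape[-2:], mode="bilinear", align_corners=False,
        )
        # High-frequency band: the detail removed by the blur.
        high = x - low
        # Attend to each band separately, then recombine (frequency calibration).
        return low * self.low_attn(low) + high * self.high_attn(high)


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)        # dummy feature map
    print(DualFrequencyAttention(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

Both blocks are drop-in nn.Module layers; in a full network such as DDLNet they would presumably be interleaved with the backbone's convolutional stages.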
