A Scalable Training Strategy for Blind Multi-Distribution Noise Removal

Kevin Zhang, Sakshum Kulshrestha, Christopher A. Metzler
{"title":"A Scalable Training Strategy for Blind Multi-Distribution Noise Removal","authors":"Kevin Zhang;Sakshum Kulshrestha;Christopher A. Metzler","doi":"10.1109/TIP.2024.3482185","DOIUrl":null,"url":null,"abstract":"Despite recent advances, developing general-purpose universal denoising and artifact-removal networks remains largely an open problem: Given fixed network weights, one inherently trades-off specialization at one task (e.g., removing Poisson noise) for performance at another (e.g., removing speckle noise). In addition, training such a network is challenging due to the curse of dimensionality: As one increases the dimensions of the specification-space (i.e., the number of parameters needed to describe the noise distribution) the number of unique specifications one needs to train for grows exponentially. Uniformly sampling this space will result in a network that does well at very challenging problem specifications but poorly at easy problem specifications, where even large errors will have a small effect on the overall mean squared error. In this work we propose training denoising networks using an adaptive-sampling/active-learning strategy. Our work improves upon a recently proposed universal denoiser training strategy by extending these results to higher dimensions and by incorporating a polynomial approximation of the true specification-loss landscape. This approximation allows us to reduce training times by almost two orders of magnitude. We test our method on simulated joint Poisson-Gaussian-Speckle noise and demonstrate that with our proposed training strategy, a single blind, generalist denoiser network can achieve peak signal-to-noise ratios within a uniform bound of specialized denoiser networks across a large range of operating conditions. We also capture a small dataset of images with varying amounts of joint Poisson-Gaussian-Speckle noise and demonstrate that a universal denoiser trained using our adaptive-sampling strategy outperforms uniformly trained baselines.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6216-6226"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10729723/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Despite recent advances, developing general-purpose universal denoising and artifact-removal networks remains largely an open problem: given fixed network weights, one inherently trades off specialization at one task (e.g., removing Poisson noise) for performance at another (e.g., removing speckle noise). In addition, training such a network is challenging due to the curse of dimensionality: as one increases the dimension of the specification space (i.e., the number of parameters needed to describe the noise distribution), the number of unique specifications one needs to train for grows exponentially. Uniformly sampling this space results in a network that does well at very challenging problem specifications but poorly at easy problem specifications, where even large errors have only a small effect on the overall mean squared error. In this work, we propose training denoising networks using an adaptive-sampling/active-learning strategy. Our work improves upon a recently proposed universal denoiser training strategy by extending these results to higher dimensions and by incorporating a polynomial approximation of the true specification-loss landscape. This approximation allows us to reduce training times by almost two orders of magnitude. We test our method on simulated joint Poisson-Gaussian-Speckle noise and demonstrate that, with our proposed training strategy, a single blind, generalist denoiser network can achieve peak signal-to-noise ratios within a uniform bound of specialized denoiser networks across a large range of operating conditions. We also capture a small dataset of images with varying amounts of joint Poisson-Gaussian-Speckle noise and demonstrate that a universal denoiser trained using our adaptive-sampling strategy outperforms uniformly trained baselines.
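The abstract refers to a three-parameter specification space (Poisson, Gaussian, and speckle noise levels) and to a polynomial approximation of the specification-loss landscape. The two sketches below illustrate, in NumPy, how such a pipeline could look. The paper's exact forward model, parameter ranges, and sampling rule are not reproduced here; every function name and weighting choice in the code is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def add_joint_noise(img, photon_level, sigma_g, speckle_looks, rng=None):
    """Apply multiplicative speckle, Poisson shot noise, and additive Gaussian
    noise to a clean image with values in [0, 1].

    (photon_level, sigma_g, speckle_looks) is one point in the 3-D
    specification space; this particular forward model is an assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Unit-mean gamma speckle averaged over `speckle_looks` looks;
    # more looks => weaker multiplicative noise.
    speckle = rng.gamma(shape=speckle_looks, scale=1.0 / speckle_looks,
                        size=img.shape)
    x = img * speckle
    # Poisson shot noise at the given photon level; higher level => less noise.
    x = rng.poisson(np.clip(x, 0.0, None) * photon_level) / photon_level
    # Additive Gaussian read noise.
    return (x + rng.normal(0.0, sigma_g, size=img.shape)).astype(np.float32)
```

The adaptive-sampling idea can be mimicked with an ordinary least-squares polynomial surrogate fit to observed (specification, loss) pairs. The weighting rule below, which samples specifications the surrogate predicts to be hard more often, is a generic active-learning heuristic; the paper's actual criterion targets a uniform PSNR gap to specialized denoisers and may differ.

```python
import itertools
import numpy as np

def polynomial_features(specs, degree=2):
    """All monomials of the specification parameters up to `degree`, plus a bias term."""
    specs = np.atleast_2d(specs)
    n, d = specs.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for combo in itertools.combinations_with_replacement(range(d), deg):
            cols.append(np.prod(specs[:, list(combo)], axis=1))
    return np.stack(cols, axis=1)

def fit_loss_surrogate(specs, losses, degree=2):
    """Least-squares polynomial approximation of the specification-loss landscape."""
    A = polynomial_features(specs, degree)
    coeffs, *_ = np.linalg.lstsq(A, losses, rcond=None)
    return coeffs

def spec_sampling_probs(candidate_specs, coeffs, degree=2, temperature=1.0):
    """Convert predicted losses into sampling probabilities over candidate
    specifications, biasing training toward predicted-hard specifications."""
    pred = polynomial_features(candidate_specs, degree) @ coeffs
    w = np.exp((pred - pred.max()) / temperature)  # numerically stable softmax
    return w / w.sum()
```

In use, one might periodically evaluate the current network on a small grid of specifications, refit the surrogate, and draw the next round of training specifications with `np.random.choice(len(grid), p=spec_sampling_probs(grid, coeffs))`, so that cheap surrogate predictions stand in for exhaustive per-specification evaluation.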