Constructing an Interpretable Deep Denoiser by Unrolling Graph Laplacian Regularizer
Seyed Alireza Hosseini, Tam Thuc Do, Gene Cheung, Yuichi Tanaka
arXiv:2409.06676, arXiv - EE - Signal Processing, 2024-09-10
Abstract
An image denoiser can be used for a wide range of restoration problems via the Plug-and-Play (PnP) architecture. In this paper, we propose a general framework to build an interpretable graph-based deep denoiser (GDD) by unrolling a solution to a maximum a posteriori (MAP) problem equipped with a graph Laplacian regularizer (GLR) as signal prior. Leveraging a recent theorem showing that any (pseudo-)linear denoiser $\boldsymbol \Psi$, under mild conditions, can be mapped to a solution of a MAP denoising problem regularized using GLR, we first initialize a graph Laplacian matrix $\mathbf L$ via a truncated Taylor Series Expansion (TSE) of $\boldsymbol \Psi^{-1}$. Then, we compute the MAP linear system solution by unrolling iterations of the conjugate gradient (CG) algorithm into a sequence of neural layers forming a feed-forward network -- one that is amenable to parameter tuning. The resulting GDD network is "graph-interpretable", low in parameter count, and easy to initialize thanks to $\mathbf L$ being derived from a known well-performing denoiser $\boldsymbol \Psi$. Experimental results show that GDD achieves image denoising performance competitive with existing methods while employing far fewer parameters, and that it is more robust to covariate shift.
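
To make the two steps in the abstract concrete, note that for the GLR-regularized MAP problem $\min_{\mathbf x} \|\mathbf y - \mathbf x\|_2^2 + \mu \mathbf x^\top \mathbf L \mathbf x$ the solution is $\mathbf x^* = (\mathbf I + \mu \mathbf L)^{-1} \mathbf y$, so a pseudo-linear denoiser $\boldsymbol \Psi$ corresponds to $\mu \mathbf L = \boldsymbol \Psi^{-1} - \mathbf I$, which can be approximated by a truncated series in $(\mathbf I - \boldsymbol \Psi)$. The sketch below is a minimal NumPy illustration of this idea, not the authors' implementation: the toy smoothing matrix standing in for $\boldsymbol \Psi$, the truncation order, and the per-layer step scales (fixed here, learnable in an unrolled network) are all assumptions made for illustration.

```python
import numpy as np

def laplacian_from_denoiser(Psi, order=3):
    """Approximate mu*L from a (pseudo-)linear denoiser Psi.

    Uses the truncated series  Psi^{-1} - I  ≈  sum_{k=1}^{order} (I - Psi)^k,
    valid when the spectral radius of (I - Psi) is below 1.
    ('order' is an illustrative choice, not a value from the paper.)
    """
    n = Psi.shape[0]
    A = np.eye(n) - Psi
    muL = np.zeros_like(Psi)
    term = np.eye(n)
    for _ in range(order):
        term = term @ A          # (I - Psi)^k
        muL += term
    return muL

def unrolled_cg_denoise(y, muL, n_layers=5, step_scales=None):
    """Solve (I + mu*L) x = y with a fixed number of CG iterations.

    Each iteration plays the role of one feed-forward layer; in a trainable
    version the per-layer scales would be learned parameters.
    """
    n = y.shape[0]
    B = np.eye(n) + muL                  # MAP linear-system matrix
    if step_scales is None:
        step_scales = np.ones(n_layers)  # placeholder for learned parameters

    x = np.zeros_like(y)
    r = y - B @ x                        # residual
    p = r.copy()                         # search direction
    for k in range(n_layers):
        Bp = B @ p
        alpha = step_scales[k] * (r @ r) / (p @ Bp)
        x = x + alpha * p
        r_new = r - alpha * Bp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

# Toy usage on a 1-D signal with a Gaussian smoothing matrix standing in
# for the pseudo-linear denoiser Psi (an assumption for illustration).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))
noisy = clean + 0.2 * rng.standard_normal(64)
W = np.exp(-0.5 * (np.subtract.outer(np.arange(64), np.arange(64)) / 2.0) ** 2)
Psi = W / W.sum(axis=1, keepdims=True)   # row-stochastic smoother
muL = laplacian_from_denoiser(Psi, order=3)
denoised = unrolled_cg_denoise(noisy, muL, n_layers=5)
```

In the actual GDD network, the graph (and hence $\mathbf L$) would come from the chosen denoiser $\boldsymbol \Psi$ operating on image patches, and the unrolled CG layers would expose tunable parameters trained end-to-end; the fixed scalars above merely mark where those parameters would sit.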