Iteratively Refined Image Reconstruction with Learned Attentive Regularizers
Mehrsa Pourya, Sebastian Neumayer, Michael Unser
Numerical Functional Analysis and Optimization, vol. 45, no. 7-9, pp. 411-440
Published online: 2024-08-11
DOI: 10.1080/01630563.2024.2384849
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371266/pdf/
Citations: 0
Abstract
We propose a regularization scheme for image reconstruction that leverages the power of deep learning while hinging on classic sparsity-promoting models. Many deep-learning-based models are hard to interpret and cumbersome to analyze theoretically. In contrast, our scheme is interpretable because it corresponds to the minimization of a series of convex problems. For each problem in the series, a mask is generated based on the previous solution to refine the regularization strength spatially. In this way, the model becomes progressively attentive to the image structure. For the underlying update operator, we prove the existence of a fixed point. As a special case, we investigate a mask generator for which the fixed-point iterations converge to a critical point of an explicit energy functional. In our experiments, we match the performance of state-of-the-art learned variational models for the solution of inverse problems. Additionally, we offer a promising balance between interpretability, theoretical guarantees, reliability, and performance.
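The scheme described above alternates between solving a convex problem and generating a spatial mask from the previous solution, so that regularization is progressively weakened where image structure is detected. The following sketch illustrates that structure on a 1-D sparse-denoising toy problem. The mask generator here is a simple hand-crafted reweighting rule (in the spirit of iteratively reweighted ℓ1), not the learned generator of the paper; function names and all parameter values are illustrative assumptions.

```python
import numpy as np


def soft_threshold(v, t):
    # Proximal operator of the weighted l1 norm: elementwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def attentive_l1_denoise(y, base_lam=0.5, eps=0.1, n_outer=10):
    """Iteratively refined sparse denoising (toy stand-in for the scheme).

    Each outer iteration solves the convex problem
        x_{k+1} = argmin_x 0.5 * ||x - y||^2 + ||m_k * x||_1,
    whose solution is elementwise soft-thresholding, and then updates
    the spatial mask m_k from the current solution. NOTE: the paper
    uses a learned mask generator; the reweighting below is a simple
    hand-crafted substitute that only mimics the overall structure.
    """
    x = y.copy()
    for _ in range(n_outer):
        # Mask: weak regularization where the signal is strong,
        # strong regularization where it is near zero.
        m = base_lam / (eps + np.abs(x))
        x = soft_threshold(y, m)
    return x


# Usage: recover a sparse signal from a noisy observation.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[[5, 30, 70]] = [3.0, -2.0, 4.0]
y = x_true + 0.2 * rng.standard_normal(100)
x_hat = attentive_l1_denoise(y)
```

Because each inner problem is convex with a closed-form solution, the iterates are easy to analyze; the fixed-point viewpoint of the paper concerns exactly this kind of alternation between a convex solve and a mask update.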
Journal Introduction
Numerical Functional Analysis and Optimization is a journal aimed at the development and application of functional analysis and operator-theoretic methods in numerical analysis, optimization and approximation theory, control theory, signal and image processing, inverse and ill-posed problems, applied and computational harmonic analysis, operator equations, and nonlinear functional analysis. Not all high-quality papers within the union of these fields are within the scope of NFAO. Generalizations and abstractions that significantly advance their fields and reinforce the concrete by providing new insight and important results for problems arising from applications are welcome. On the other hand, technical generalizations for their own sake with window dressing about applications, or variants of known results and algorithms, are not suitable for this journal.
Numerical Functional Analysis and Optimization publishes about 70 papers per year. It is our current policy to limit consideration to one submitted paper by any author/co-author per two consecutive years. Exceptions will be made for seminal papers.