{"title":"Image Segmentation Using Bayesian Inference for Convex Variant Mumford–Shah Variational Model","authors":"Xu Xiao, Youwei Wen, Raymond Chan, Tieyong Zeng","doi":"10.1137/23m1545379","DOIUrl":"https://doi.org/10.1137/23m1545379","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 248-272, March 2024. <br/> Abstract. The Mumford–Shah model is a classical segmentation model, but its objective function is nonconvex. The smoothing and thresholding (SaT) approach is a convex variant of the Mumford–Shah model, which seeks a smoothed approximation solution to the Mumford–Shah model. The SaT approach separates the segmentation into two stages: first, a convex energy function is minimized to obtain a smoothed image; then, a thresholding technique is applied to segment the smoothed image. The energy function consists of three weighted terms and the weights are called the regularization parameters. Selecting appropriate regularization parameters is crucial to achieving effective segmentation results. Traditionally, the regularization parameters are chosen by trial-and-error, which is a very time-consuming procedure and is not practical in real applications. In this paper, we apply a Bayesian inference approach to infer the regularization parameters and estimate the smoothed image. We analyze the convex variant Mumford–Shah variational model from a statistical perspective and then construct a hierarchical Bayesian model. A mean field variational family is used to approximate the posterior distribution. The variational density of the smoothed image is assumed to have a Gaussian density, and the hyperparameters are assumed to have Gamma variational densities. All the parameters in the Gaussian density and Gamma densities are iteratively updated. Experimental results show that the proposed approach is capable of generating high-quality segmentation results. Although the proposed approach contains an inference step to estimate the regularization parameters, it requires less CPU running time to obtain the smoothed image than previous methods.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139585765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Tensor CUR Decompositions: Rapid Low-Tucker-Rank Tensor Recovery with Sparse Corruptions","authors":"HanQin Cai, Zehan Chao, Longxiu Huang, Deanna Needell","doi":"10.1137/23m1574282","DOIUrl":"https://doi.org/10.1137/23m1574282","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 225-247, March 2024. <br/> Abstract. We study the tensor robust principal component analysis (TRPCA) problem, a tensorial extension of matrix robust principal component analysis, which aims to split the given tensor into an underlying low-rank component and a sparse outlier component. This work proposes a fast algorithm, called robust tensor CUR decompositions (RTCUR), for large-scale nonconvex TRPCA problems under the Tucker rank setting. RTCUR is developed within a framework of alternating projections that projects between the set of low-rank tensors and the set of sparse tensors. We utilize the recently developed tensor CUR decomposition to substantially reduce the computational complexity in each projection. In addition, we develop four variants of RTCUR for different application settings. We demonstrate the effectiveness and computational advantages of RTCUR against state-of-the-art methods on both synthetic and real-world datasets.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139554888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct Imaging Methods for Reconstructing a Locally Rough Interface from Phaseless Total-Field Data or Phased Far-Field Data","authors":"Long Li, Jiansheng Yang, Bo Zhang, Haiwen Zhang","doi":"10.1137/23m1571393","DOIUrl":"https://doi.org/10.1137/23m1571393","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 188-224, March 2024. <br/> Abstract. This paper is concerned with the problem of inverse scattering of time-harmonic acoustic plane waves by a two-layered medium with a locally rough interface in two dimensions. A direct imaging method is proposed to reconstruct the locally rough interface from the phaseless total-field data measured on the upper half of the circle with a large radius at a fixed frequency or from the phased far-field data measured on the upper half of the unit circle at a fixed frequency. The presence of the locally rough interface poses challenges in the theoretical analysis of the imaging methods. To address these challenges, a technically involved asymptotic analysis is provided for the relevant oscillatory integrals involved in the imaging methods, based mainly on the techniques and results in our recent work [L. Li, J. Yang, B. Zhang, and H. Zhang, arXiv:2208.00456, 2022] on the uniform far-field asymptotics of the scattered field for acoustic scattering in a two-layered medium. Finally, extensive numerical experiments are conducted to demonstrate the feasibility and robustness of our imaging algorithms.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139555072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks","authors":"Bangti Jin, Xiyao Li, Qimeng Quan, Zhi Zhou","doi":"10.1137/23m1562536","DOIUrl":"https://doi.org/10.1137/23m1562536","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 147-187, March 2024. <br/> Abstract. In this work, we develop a novel approach using deep neural networks (DNNs) to reconstruct the conductivity distribution in elliptic problems from one measurement of the solution over the whole domain. The approach is based on a mixed reformulation of the governing equation and utilizes the standard least-squares objective, with DNNs as ansatz functions to approximate the conductivity and flux simultaneously. We provide a thorough analysis of the DNN approximations of the conductivity for both continuous and empirical losses, including rigorous error estimates that are explicit in terms of the noise level, various penalty parameters, and neural network architectural parameters (depth, width, and parameter bounds). We also provide multiple numerical experiments in two dimensions and multidimensions to illustrate distinct features of the approach, e.g., excellent stability with respect to data noise and capability of solving high-dimensional problems.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139554893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polynomial Preconditioners for Regularized Linear Inverse Problems","authors":"Siddharth S. Iyer, Frank Ong, Xiaozhi Cao, Congyu Liao, Luca Daniel, Jonathan I. Tamir, Kawin Setsompop","doi":"10.1137/22m1530355","DOIUrl":"https://doi.org/10.1137/22m1530355","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 116-146, March 2024. <br/> Abstract. This work aims to accelerate the convergence of proximal gradient methods used to solve regularized linear inverse problems. This is achieved by designing a polynomial-based preconditioner that targets the eigenvalue spectrum of the normal operator derived from the linear operator. The preconditioner does not assume any explicit structure on the linear function and thus can be deployed in diverse applications of interest. The efficacy of the preconditioner is validated on three different Magnetic Resonance Imaging applications, where it is seen to achieve faster iterative convergence (around [math] faster, depending on the application of interest) while achieving similar reconstruction quality.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139516023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Weakly Convex Regularizers for Convergent Image-Reconstruction Algorithms","authors":"Alexis Goujon, Sebastian Neumayer, Michael Unser","doi":"10.1137/23m1565243","DOIUrl":"https://doi.org/10.1137/23m1565243","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 91-115, March 2024. <br/> Abstract.We propose to learn non-convex regularizers with a prescribed upper bound on their weak-convexity modulus. Such regularizers give rise to variational denoisers that minimize a convex energy. They rely on few parameters (less than 15,000) and offer a signal-processing interpretation as they mimic handcrafted sparsity-promoting regularizers. Through numerical experiments, we show that such denoisers outperform convex-regularization methods as well as the popular BM3D denoiser. Additionally, the learned regularizer can be deployed to solve inverse problems with iterative schemes that provably converge. For both CT and MRI reconstruction, the regularizer generalizes well and offers an excellent tradeoff between performance, number of parameters, guarantees, and interpretability when compared to other data-driven approaches.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139498043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification of Sparsely Representable Diffusion Parameters in Elliptic Problems","authors":"Luzia N. Felber, Helmut Harbrecht, Marc Schmidlin","doi":"10.1137/23m1565346","DOIUrl":"https://doi.org/10.1137/23m1565346","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 61-90, March 2024. <br/> Abstract. We consider the task of estimating the unknown diffusion parameter in an elliptic PDE as a model problem to develop and test the effectiveness and robustness to noise of reconstruction schemes with sparsity regularization. To this end, the model problem is recast as a nonlinear infinite dimensional optimization problem, where the logarithm of the unknown diffusion parameter is modeled using a linear combination of the elements of a dictionary, i.e., a known bounded sequence of [math] functions, with unknown coefficients that form a sequence in [math]. We show that the regularization of this nonlinear optimization problem using a weighted [math]-norm has minimizers that are finitely supported. We then propose modifications of well-known algorithms (ISTA and FISTA) to find a minimizer of this weighted [math]-norm regularized nonlinear optimization problem that accounts for the fact that in general the smooth part of the functional being optimized is a functional only defined over [math]. We also introduce semismooth methods (ASISTA and FASISTA) for finding a minimizer, which locally uses Gauss–Newton type surrogate models that additionally are stabilized by means of a Levenberg–Marquardt type approach. Our numerical examples show that the regularization with the weighted [math]-norm indeed does make the estimation more robust with respect to noise. Moreover, the numerical examples also demonstrate that the ASISTA and FASISTA methods are quite efficient, outperforming both ISTA and FISTA.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139498035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Sparsity-Promoting Regularizers Using Bilevel Optimization","authors":"Avrajit Ghosh, Michael McCann, Madeline Mitchell, Saiprasad Ravishankar","doi":"10.1137/22m1506547","DOIUrl":"https://doi.org/10.1137/22m1506547","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 31-60, March 2024. <br/> Abstract. We present a gradient-based heuristic method for supervised learning of sparsity-promoting regularizers for denoising signals and images. Sparsity-promoting regularization is a key ingredient in solving modern signal reconstruction problems; however, the operators underlying these regularizers are usually either designed by hand or learned from data in an unsupervised way. The recent success of supervised learning (e.g., with convolutional neural networks) in solving image reconstruction problems suggests that it could be a fruitful approach to designing regularizers. Towards this end, we propose to denoise signals using a variational formulation with a parametric, sparsity-promoting regularizer, where the parameters of the regularizer are learned to minimize the mean squared error of reconstructions on a training set of ground truth image and measurement pairs. Training involves solving a challenging bilevel optimization problem; we derive an expression for the gradient of the training loss using the closed-form solution of the denoising problem and provide an accompanying gradient descent algorithm to minimize it. Our experiments with structured 1D signals and natural images indicate that the proposed method can learn an operator that outperforms well-known regularizers (total variation, DCT-sparsity, and unsupervised dictionary learning) and collaborative filtering for denoising.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139411114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Variational Model for Nonuniform Low-Light Image Enhancement","authors":"Fan Jia, Shen Mao, Xue-Cheng Tai, Tieyong Zeng","doi":"10.1137/22m1543161","DOIUrl":"https://doi.org/10.1137/22m1543161","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 1, Page 1-30, March 2024. <br/> Abstract. Low-light image enhancement plays an important role in computer vision applications, which is a fundamental low-level task and can affect high-level computer vision tasks. To solve this ill-posed problem, a lot of methods have been proposed to enhance low-light images. However, their performance degrades significantly under nonuniform lighting conditions. Due to the rapid variation of illuminance in different regions in natural images, it is challenging to enhance low-light parts and retain normal-light parts simultaneously in the same image. Commonly, either the low-light parts are underenhanced or the normal-light parts are overenhanced, accompanied by color distortion and artifacts. To overcome this problem, we propose a simple and effective Retinex-based model with reflectance map reweighting for images under nonuniform lighting conditions. An alternating proximal gradient (APG) algorithm is proposed to solve the proposed model, in which the illumination map, the reflectance map, and the weighting map are updated iteratively. To make our model applicable to a wide range of light conditions, we design an initialization scheme for the weighting map. A theoretical analysis of the existence of the solution to our model and the convergence of the APG algorithm are also established. A series of experiments on real-world low-light images are conducted, which demonstrate the effectiveness of our method.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139102997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Supervised Deep Learning for Image Reconstruction: A Langevin Monte Carlo Approach","authors":"Ji Li, Weixi Wang, Hui Ji","doi":"10.1137/23m1548025","DOIUrl":"https://doi.org/10.1137/23m1548025","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 16, Issue 4, Page 2247-2284, December 2023. <br/> Abstract. Deep learning has proved to be a powerful tool for solving inverse problems in imaging, and most of the related work is based on supervised learning. In many applications, collecting truth images is a challenging and costly task, and the prerequisite of having a training dataset of truth images limits its applicability. This paper proposes a self-supervised deep learning method for solving inverse imaging problems that does not require any training samples. The proposed approach is built on a reparametrization of latent images using a convolutional neural network, and the reconstruction is motivated by approximating the minimum mean square error estimate of the latent image using a Langevin dynamics–based Monte Carlo (MC) method. To efficiently sample the network weights in the context of image reconstruction, we propose a Langevin MC scheme called Adam-LD, inspired by the well-known optimizer in deep learning, Adam. The proposed method is applied to solve linear and nonlinear inverse problems, specifically, sparse-view computed tomography image reconstruction and phase retrieval. Our experiments demonstrate that the proposed method outperforms existing unsupervised or self-supervised solutions in terms of reconstruction quality.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138528994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}