Analysis of a Class of Multilevel Markov Chain Monte Carlo Algorithms Based on Independent Metropolis–Hastings
Juan Pablo Madrigal-Cianci, Fabio Nobile, Raul Tempone
DOI: 10.1137/21m1420927
Published: 2023-03-03
Citations: 2
Abstract
In this work, we present, analyze, and implement a class of multilevel Markov chain Monte Carlo (ML-MCMC) algorithms based on independent Metropolis–Hastings proposals for Bayesian inverse problems. In this context, evaluating the likelihood function involves solving a complex differential model, which is approximated on a sequence of increasingly accurate discretizations. The key idea of the algorithm is to construct highly coupled Markov chains across levels and combine them with the standard multilevel Monte Carlo argument, yielding a better cost-tolerance complexity than a single-level MCMC algorithm. Our method extends the ideas of Dodwell et al. [SIAM/ASA J. Uncertain. Quantif., 3 (2015), pp. 1075–1108] to a wider range of proposal distributions. We present a thorough convergence analysis of the proposed ML-MCMC method and show, in particular, that (i) under some mild conditions on the (independent) proposals and the family of posteriors, there exists a unique invariant probability measure for the coupled chains generated by our method, and (ii) such coupled chains are uniformly ergodic. We also generalize the cost-tolerance theorem of Dodwell et al. to our wider class of ML-MCMC algorithms. Finally, we propose a self-tuning continuation-type ML-MCMC algorithm. The proposed method is tested on an array of academic examples, on which some of our theoretical results are numerically verified. These numerical experiments show that our extended ML-MCMC method remains robust when targeting certain pathological posteriors for which some previously proposed ML-MCMC algorithms fail.
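To make the coupling mechanism described in the abstract concrete, the sketch below illustrates one possible reading of it: a pair of independent Metropolis–Hastings chains at consecutive discretization levels share the same proposal draw and the same uniform variate, and the resulting correction terms enter the multilevel telescoping sum E[Q_L] = E[Q_0] + Σ_{ℓ=1}^{L} E[Q_ℓ − Q_{ℓ−1}]. This is a minimal sketch, not the paper's implementation; all callables (`logpost_f`, `logpost_c`, `qoi_f`, `qoi_c`, `sample_proposal`, `logq_proposal`) are hypothetical placeholders for the level-dependent log-posteriors, quantities of interest, and independent proposal density supplied by the user.

```python
import numpy as np

def coupled_indep_mh(logpost_f, logpost_c, qoi_f, qoi_c,
                     sample_proposal, logq_proposal, n_steps, rng):
    """Coupled pair of independent-MH chains targeting the fine (level l) and
    coarse (level l-1) posteriors.  Both chains see the same proposal draw and
    the same uniform variate, which induces the coupling that keeps the
    variance of qoi_f(x_f) - qoi_c(x_c) small."""
    x_f = x_c = sample_proposal(rng)          # common starting point
    diffs = np.empty(n_steps)
    for k in range(n_steps):
        y = sample_proposal(rng)              # shared independent proposal
        log_u = np.log(rng.uniform())         # shared uniform for accept/reject
        # independent-MH log acceptance ratio: log[pi(y) q(x) / (pi(x) q(y))]
        if log_u < (logpost_f(y) - logpost_f(x_f)
                    + logq_proposal(x_f) - logq_proposal(y)):
            x_f = y
        if log_u < (logpost_c(y) - logpost_c(x_c)
                    + logq_proposal(x_c) - logq_proposal(y)):
            x_c = y
        diffs[k] = qoi_f(x_f) - qoi_c(x_c)
    return diffs.mean()

def ml_mcmc_estimate(logposts, qois, sample_proposal, logq_proposal,
                     n_steps, rng):
    """Multilevel estimate of E_{pi_L}[Q_L] via the telescoping sum
    E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}], with one (coupled) chain per level.
    logposts[l] and qois[l] are the level-l log-posterior and QoI."""
    # level 0: a single, uncoupled independent-MH chain
    x = sample_proposal(rng)
    q0 = np.empty(n_steps[0])
    for k in range(n_steps[0]):
        y = sample_proposal(rng)
        if np.log(rng.uniform()) < (logposts[0](y) - logposts[0](x)
                                    + logq_proposal(x) - logq_proposal(y)):
            x = y
        q0[k] = qois[0](x)
    estimate = q0.mean()
    # levels 1..L: coupled correction terms
    for l in range(1, len(logposts)):
        estimate += coupled_indep_mh(logposts[l], logposts[l - 1],
                                     qois[l], qois[l - 1],
                                     sample_proposal, logq_proposal,
                                     n_steps[l], rng)
    return estimate
```

As a usage note, one would pass `rng = np.random.default_rng(seed)`, a list of per-level log-posteriors built from increasingly fine discretizations of the forward model, and per-level chain lengths `n_steps`; in the algorithm analyzed in the paper, those chain lengths are chosen to balance per-level variance and cost, which is what the generalized cost-tolerance theorem and the self-tuning continuation variant address.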