Title: Convergence of Langevin-simulated annealing algorithms with multiplicative noise II: Total variation
Authors: Pierre Bras, G. Pagès
Journal: Monte Carlo Methods and Applications, vol. 29, no. 1, pp. 203–219 (Q3, Statistics & Probability)
DOI: 10.1515/mcma-2023-2009
Publication date: 2022-05-30
Citations: 4
Abstract
We study the convergence of Langevin-simulated annealing type algorithms with multiplicative noise, i.e. for a potential function $V\colon\mathbb{R}^{d}\to\mathbb{R}$ to minimize, we consider the stochastic differential equation
$$dY_{t} = -\sigma\sigma^{\top}\nabla V(Y_{t})\,dt + a(t)\,\sigma(Y_{t})\,dW_{t} + a(t)^{2}\,\Upsilon(Y_{t})\,dt,$$
where $(W_{t})$ is a Brownian motion, $\sigma\colon\mathbb{R}^{d}\to\mathcal{M}_{d}(\mathbb{R})$ is an adaptive (multiplicative) noise coefficient, $a\colon\mathbb{R}^{+}\to\mathbb{R}^{+}$ is a function decreasing to 0, and $\Upsilon$ is a correction term. Allowing $\sigma$ to depend on the position brings faster convergence in comparison with the classical Langevin equation $dY_{t} = -\nabla V(Y_{t})\,dt + \sigma\,dW_{t}$. In a previous paper, we established the convergence in $L^{1}$-Wasserstein distance of $Y_{t}$ and of its associated Euler scheme $\bar{Y}_{t}$ to $\operatorname{argmin}(V)$ with the classical schedule $a(t) = A\log^{-1/2}(t)$. In the present paper, we prove the convergence in total variation distance. The total variation case appears more demanding to deal with and requires regularization lemmas.
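To make the scheme concrete, here is a minimal sketch of an Euler–Maruyama discretization of the SDE above with the logarithmic schedule $a(t) = A\log^{-1/2}(t)$. This is an illustrative implementation, not the authors' exact scheme: `grad_V`, `sigma`, and `upsilon` are user-supplied callables (the paper specifies the correction term $\Upsilon$ in terms of derivatives of $\sigma\sigma^{\top}$; here it is simply passed in), and the step size, horizon, and starting time $t_0 > 1$ are arbitrary choices.

```python
import numpy as np

def euler_langevin_sa(grad_V, sigma, upsilon, y0,
                      A=1.0, n_steps=10_000, h=1e-2, t0=2.0, rng=None):
    """Euler-Maruyama scheme for
        dY_t = -sigma sigma^T grad_V(Y_t) dt
               + a(t) sigma(Y_t) dW_t + a(t)^2 Upsilon(Y_t) dt,
    with the schedule a(t) = A * log(t)^(-1/2).

    grad_V, upsilon: R^d -> R^d callables; sigma: R^d -> d x d matrix.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y0, dtype=float)
    d = y.size
    t = t0  # start past t = 1 so that log(t) > 0
    for _ in range(n_steps):
        a = A / np.sqrt(np.log(t))          # cooling schedule a(t)
        S = sigma(y)                        # multiplicative noise matrix
        drift = -S @ S.T @ grad_V(y) + a**2 * upsilon(y)
        noise = a * np.sqrt(h) * (S @ rng.standard_normal(d))
        y = y + h * drift + noise
        t += h
    return y
```

For instance, with the quadratic potential $V(y) = \lvert y\rvert^{2}/2$, constant $\sigma = I_{d}$ (so $\Upsilon \equiv 0$), the iterates concentrate near $\operatorname{argmin}(V) = \{0\}$ as the noise level $a(t)$ decays; with the slow $\log^{-1/2}$ schedule, the residual noise after a moderate number of steps is still visible, consistent with the logarithmic cooling required for simulated annealing.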