Explainable neural-networked variational inference: A new and fast paradigm with automatic differentiation for high-dimensional Bayesian inverse problems
Authors: Jiang Mo, Wang-Ji Yan
Journal: Reliability Engineering & System Safety, Volume 264, Article 111337
DOI: 10.1016/j.ress.2025.111337
Publication date: 2025-06-06
URL: https://www.sciencedirect.com/science/article/pii/S0951832025005381
Citations: 0
Abstract
Bayesian inference offers a rigorous framework for parameter inversion and uncertainty quantification in engineering disciplines. Despite the advances introduced by Variational Bayesian Inference (VBI), Bayesian Inverse Problems (BIPs) with implicit and non-differentiable forward solvers still face significant limitations associated with mean-field approximation, computational difficulties, poor scalability, and high-dimensional data complexities. In response to these challenges, a novel Variational Inference (VI) framework featuring an equivalent neural network representation with automatic differentiation is proposed. The network architecture, "VBI-Net", comprising a variational distribution sampler, a likelihood function approximator, and a variational free-energy loss function, is designed to mirror the VI framework with multivariate Gaussian variational distributions. The sampler yields posterior samples of the system model parameters and prediction errors, while incorporating the variational parameters as differentiable and explainable network parameters via the reparameterization trick. The likelihood function approximator employs a neural network as a viable replacement for time-intensive and non-differentiable forward solvers, enabling efficient likelihood evaluations. The loss function measures the goodness of the variational distribution. The seamless integration of the sampler and approximator guarantees the overall differentiability of the architecture, facilitating automatic differentiation and gradient-based optimization, and enabling scalability to high-dimensional scenarios. Furthermore, the explainable neural-networked implementation scheme leverages the CUDA support embedded in deep learning frameworks to inherently enable parallel computation, GPU acceleration, and optimized tensor operations.
To demonstrate its efficacy, the method is applied in Bayesian model updating scenarios involving a numerical shear building and a practical structure.
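The two ingredients the abstract describes, a reparameterization-trick sampler for a multivariate Gaussian variational distribution and a differentiable surrogate standing in for the forward solver, combine into a Monte Carlo estimate of the variational free energy (the negative ELBO). The following is a minimal NumPy sketch of that objective, not the paper's implementation: `surrogate_forward` is a hypothetical stand-in for the neural likelihood approximator, and a standard-normal prior with a diagonal Gaussian posterior is assumed so the KL term has a closed form. In the paper, the same computation would live inside an autodiff framework so that gradients with respect to `mu` and `log_sigma` come for free.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_forward(theta):
    """Hypothetical cheap, differentiable surrogate for the forward solver.
    Here simply a linear map from parameters to predicted responses."""
    return 2.0 * theta

def sample_q(mu, log_sigma, n_mc, rng):
    """Reparameterization trick: theta = mu + sigma * eps, eps ~ N(0, I).
    Randomness is isolated in eps, so theta is differentiable in (mu, sigma)."""
    eps = rng.standard_normal((n_mc, mu.size))
    return mu + np.exp(log_sigma) * eps

def neg_elbo(mu, log_sigma, y_obs, noise_std, n_mc, rng):
    """Monte Carlo variational free energy: -E_q[log p(y|theta)] + KL(q || prior)."""
    theta = sample_q(mu, log_sigma, n_mc, rng)
    pred = surrogate_forward(theta)
    # Gaussian measurement-error log-likelihood for each posterior sample
    log_lik = -0.5 * np.sum((y_obs - pred) ** 2, axis=1) / noise_std**2
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) )
    sigma2 = np.exp(2.0 * log_sigma)
    kl = 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)
    return -log_lik.mean() + kl

mu, log_sigma = np.zeros(3), np.zeros(3)   # variational parameters to optimize
y_obs = np.array([1.0, 2.0, 3.0])          # toy observed responses
loss = neg_elbo(mu, log_sigma, y_obs, noise_std=0.5, n_mc=256, rng=rng)
```

Minimizing `loss` with any gradient-based optimizer drives the variational distribution toward the posterior; in a CUDA-enabled deep learning framework the Monte Carlo batch over `n_mc` samples maps directly onto the parallel tensor operations the abstract highlights.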
Journal introduction:
Elsevier publishes Reliability Engineering & System Safety in association with the European Safety and Reliability Association and the Safety Engineering and Risk Analysis Division. The international journal is devoted to developing and applying methods to enhance the safety and reliability of complex technological systems, like nuclear power plants, chemical plants, hazardous waste facilities, space systems, offshore and maritime systems, transportation systems, constructed infrastructure, and manufacturing plants. The journal normally publishes only articles that involve the analysis of substantive problems related to the reliability of complex systems or present techniques and/or theoretical results that have a discernable relationship to the solution of such problems. An important aim is to balance academic material and practical applications.