Thierry Denœux. International Journal of Approximate Reasoning, Volume 182, Article 109423. Published 2025-03-21. DOI: 10.1016/j.ijar.2025.109423
Uncertainty quantification in regression neural networks using evidential likelihood-based inference
We introduce a new method for quantifying prediction uncertainty in regression neural networks using evidential likelihood-based inference. The method is based on a Gaussian approximation of the likelihood function and a linearization of the network output with respect to the weights. Prediction uncertainty is described by a random fuzzy set inducing a predictive belief function. Two models are considered: a simple one with constant conditional variance, and a more complex one in which the conditional variance is predicted by an auxiliary neural network. Both models are trained by regularized log-likelihood maximization using a standard optimization algorithm. The postprocessing required for uncertainty quantification consists only of a single computation and inversion of the Hessian matrix after convergence. Numerical experiments show that the approximations are quite accurate and that the method allows for conservative uncertainty-aware predictions.
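The Gaussian-likelihood and linearization ideas in the abstract can be illustrated in the simplest possible setting: a model that is linear in its weights with known noise variance, where the linearization of the output with respect to the weights is exact. The sketch below is only an illustration of that approximation, not of the paper's evidential belief-function construction; all names (`w_hat`, `x_star`, the regularization weight `lam`) are hypothetical, and the Hessian of the regularized negative log-likelihood is computed and inverted once after fitting, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 + Gaussian noise with known sigma (constant-variance model)
n, sigma, lam = 50, 0.3, 1e-2
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([x, np.ones(n)])       # design matrix: (weight, bias) features
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, n)

# Hessian of the regularized negative log-likelihood at the optimum.
# For a model linear in its weights this is exact and constant in w.
H = X.T @ X / sigma**2 + lam * np.eye(2)

# Regularized maximum-likelihood estimate (here available in closed form)
w_hat = np.linalg.solve(H, X.T @ y / sigma**2)

# Linearized predictive uncertainty at a new input x_star: the gradient of
# the output with respect to the weights is just the feature vector, so the
# epistemic variance is g^T H^{-1} g, to which the noise variance is added.
x_star = np.array([0.5, 1.0])
epistemic_var = x_star @ np.linalg.solve(H, x_star)
total_var = sigma**2 + epistemic_var

print("w_hat:", w_hat, "total predictive variance:", total_var)
```

For an actual neural network the gradient `g` would come from backpropagation at the fitted weights, and `H` would be the Hessian of the regularized negative log-likelihood computed once after training; the structure of the computation is the same.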
Journal introduction:
The International Journal of Approximate Reasoning is intended to serve as a forum for the treatment of imprecision and uncertainty in Artificial and Computational Intelligence, covering both the foundations of uncertainty theories, and the design of intelligent systems for scientific and engineering applications. It publishes high-quality research papers describing theoretical developments or innovative applications, as well as review articles on topics of general interest.
Relevant topics include, but are not limited to, probabilistic reasoning and Bayesian networks, imprecise probabilities, random sets, belief functions (Dempster-Shafer theory), possibility theory, fuzzy sets, rough sets, decision theory, non-additive measures and integrals, qualitative reasoning about uncertainty, comparative probability orderings, game-theoretic probability, default reasoning, nonstandard logics, argumentation systems, inconsistency tolerant reasoning, elicitation techniques, philosophical foundations and psychological models of uncertain reasoning.
Domains of application for uncertain reasoning systems include risk analysis and assessment, information retrieval and database design, information fusion, machine learning, data and web mining, computer vision, image and signal processing, intelligent data analysis, statistics, multi-agent systems, etc.