Implicit versus explicit Bayesian priors for epistemic uncertainty estimation in clinical decision support.

PLOS Digital Health · IF 7.7 · Published 2025-07-29 (eCollection 2025-07-01) · DOI: 10.1371/journal.pdig.0000801
Malte Blattmann, Adrian Lindenmeyer, Stefan Franke, Thomas Neumuth, Daniel Schneider
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306758/pdf/
Citations: 0

Abstract

Deep learning models offer transformative potential for personalized medicine by providing automated, data-driven support for complex clinical decision-making. However, their reliability degrades on out-of-distribution inputs, and traditional point-estimate predictors can give overconfident outputs even in regions where the model has little evidence. This shortcoming highlights the need for decision-support systems that quantify and communicate per-query epistemic (knowledge) uncertainty. Approximate Bayesian deep learning methods address this need by introducing principled uncertainty estimates over the model's function. In this work, we compare three such methods on the task of predicting prostate cancer-specific mortality for treatment planning, using data from the PLCO cancer screening trial. All approaches achieve strong discriminative performance (AUROC = 0.86) and produce well-calibrated probabilities in-distribution, yet they differ markedly in the fidelity of their epistemic uncertainty estimates. We show that implicit functional-prior methods (specifically, neural network ensembles and factorized weight-prior variational Bayesian neural networks) exhibit reduced fidelity when approximating the posterior distribution and yield systematically biased estimates of epistemic uncertainty. By contrast, models employing explicitly defined, distance-aware priors, such as spectral-normalized neural Gaussian processes (SNGP), provide more accurate posterior approximations and more reliable uncertainty quantification. These properties make explicitly distance-aware architectures particularly promising for building trustworthy clinical decision-support tools.
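The ensemble approach named in the abstract typically quantifies epistemic uncertainty via member disagreement: the mutual information between the prediction and the model, i.e., the entropy of the mean prediction minus the mean entropy of the member predictions. A minimal sketch of that decomposition for a binary outcome (the probabilities are illustrative, not values from the paper):

```python
import numpy as np

def binary_entropy(p):
    """Entropy (in nats) of a Bernoulli(p) distribution."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def epistemic_uncertainty(member_probs):
    """Mutual information between prediction and model identity:
    H(mean prediction) - mean of H(member predictions)."""
    member_probs = np.asarray(member_probs, dtype=float)
    total = binary_entropy(member_probs.mean())       # total predictive entropy
    aleatoric = binary_entropy(member_probs).mean()   # average per-member entropy
    return total - aleatoric

# Members agree: zero epistemic uncertainty (all entropy is aleatoric).
print(epistemic_uncertainty([0.9, 0.9, 0.9]))  # ~0.0
# Members disagree: positive epistemic uncertainty.
print(epistemic_uncertainty([0.1, 0.5, 0.9]))  # ~0.245
```

The abstract's finding is that this disagreement-based estimate can be systematically biased, because the ensemble's effective prior over functions is only implicit.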

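The "distance-aware" behavior the abstract attributes to explicit priors such as SNGP can be illustrated with a plain Gaussian process: posterior variance shrinks near training data and reverts to the prior variance far away from it. A toy sketch (this is a generic GP, not the paper's SNGP implementation):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_variance(x_train, x_query, noise=1e-2):
    """Posterior variance of a zero-mean GP with unit prior variance."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_train, x_query)
    # k(x, x) = 1 for the RBF kernel; subtract the variance explained by the data.
    return 1.0 - np.sum(k_star * np.linalg.solve(K, k_star), axis=0)

x_train = np.array([-1.0, 0.0, 1.0])
var = gp_posterior_variance(x_train, np.array([0.0, 5.0]))
# var[0]: near zero at a training point; var[1]: near the prior variance (1.0)
# at a query far from all training data -- the distance-aware behavior.
```

This is the property the abstract argues makes explicitly distance-aware priors better suited to flagging out-of-distribution queries than implicit ensemble or factorized weight priors.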
