End-User Confidence in Artificial Intelligence-Based Predictions Applied to Biomedical Data.

International Journal of Neural Systems · Pub Date: 2025-04-01 · Epub Date: 2025-02-24 · DOI: 10.1142/S0129065725500170
Zvi Kam, Lorenzo Peracchio, Giovanna Nicora
{"title":"End-User Confidence in Artificial Intelligence-Based Predictions Applied to Biomedical Data.","authors":"Zvi Kam, Lorenzo Peracchio, Giovanna Nicora","doi":"10.1142/S0129065725500170","DOIUrl":null,"url":null,"abstract":"<p><p>Applications of Artificial Intelligence (AI) are revolutionizing biomedical research and healthcare by offering data-driven predictions that assist in diagnoses. Supervised learning systems are trained on large datasets to predict outcomes for new test cases. However, they typically do not provide an indication of the reliability of these predictions, even though error estimates are integral to model development. Here, we introduce a novel method to identify regions in the feature space that diverge from training data, where an AI model may perform poorly. We utilize a compact precompiled structure that allows for fast and direct access to confidence scores in real time at the point of use without requiring access to the training data or model algorithms. As a result, users can determine when to trust the AI model's outputs, while developers can identify where the model's applicability is limited. We validate our approach using simulated data and several biomedical case studies, demonstrating that our approach provides fast confidence estimates ([Formula: see text] milliseconds per case), with high concordance to previously developed methods (<i>f</i>-[Formula: see text]). These estimates can be easily added to real-world AI applications. We argue that providing confidence estimates should be a standard practice for all AI applications in public use.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 4","pages":"2550017"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of neural systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/S0129065725500170","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/24 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Applications of Artificial Intelligence (AI) are revolutionizing biomedical research and healthcare by offering data-driven predictions that assist in diagnoses. Supervised learning systems are trained on large datasets to predict outcomes for new test cases. However, they typically do not provide an indication of the reliability of these predictions, even though error estimates are integral to model development. Here, we introduce a novel method to identify regions in the feature space that diverge from training data, where an AI model may perform poorly. We utilize a compact precompiled structure that allows for fast and direct access to confidence scores in real time at the point of use without requiring access to the training data or model algorithms. As a result, users can determine when to trust the AI model's outputs, while developers can identify where the model's applicability is limited. We validate our approach using simulated data and several biomedical case studies, demonstrating that our approach provides fast confidence estimates ([Formula: see text] milliseconds per case), with high concordance to previously developed methods (f-[Formula: see text]). These estimates can be easily added to real-world AI applications. We argue that providing confidence estimates should be a standard practice for all AI applications in public use.
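The following is a minimal, illustrative sketch of the general idea described above, not the paper's actual algorithm: a compact, query-only structure is precompiled from the training feature space, and at prediction time a test case is scored by how far it lies from well-sampled training regions, without needing the raw training data or model internals at the point of use. The class name `FeatureSpaceConfidence`, the KD-tree choice, the neighbour count, and the distance-to-confidence mapping are all assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the published method. It demonstrates the
# general pattern of precompiling a compact summary of the training feature
# space and querying it in real time for a confidence score.
import numpy as np
from scipy.spatial import cKDTree


class FeatureSpaceConfidence:
    """Hypothetical helper: precompile a KD-tree over training features and
    map k-nearest-neighbour distance to a [0, 1] confidence score."""

    def __init__(self, train_features: np.ndarray, k: int = 5):
        self.tree = cKDTree(train_features)  # compact, query-only structure
        self.k = k
        # Calibrate against typical within-training k-NN distances
        # (k + 1 because each point's nearest neighbour is itself).
        d_train, _ = self.tree.query(train_features, k=k + 1)
        self.scale = np.median(d_train[:, -1]) + 1e-12

    def confidence(self, x: np.ndarray) -> float:
        """Higher when x lies inside well-sampled training regions."""
        d, _ = self.tree.query(x, k=self.k)
        # Distances far beyond the calibrated scale decay toward 0 confidence.
        return float(np.exp(-np.mean(d) / self.scale))


# Usage with synthetic data (shapes and values are assumptions):
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))
conf = FeatureSpaceConfidence(X_train)
print(conf.confidence(rng.normal(size=8)))   # in-distribution -> near 1
print(conf.confidence(np.full(8, 10.0)))     # far from training data -> near 0
```

Because the KD-tree (or any comparable precompiled index) can be shipped alongside the model, the confidence query at the point of use is fast and does not expose the training data, which is the deployment property the abstract emphasizes.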
