Combining uncertainty information with AI recommendations supports calibration with domain knowledge

Impact Factor 2.4 · JCR Q1 (Social Sciences, Interdisciplinary) · CAS Region 4 (Management)
Harishankar Vasudevanallur Subramanian, Casey Canfield, Daniel B. Shank, Matthew Kinnison
DOI: 10.1080/13669877.2023.2259406
Journal: Journal of Risk Research
Published: 2023-10-13 (Journal Article)
Citations: 0

Abstract

The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts, such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information with AI recommendations. The experimental stimuli and task, which included identifying plant and animal images, were drawn from an existing image recognition deep learning model, a popular approach to AI. The uncertainty information was the predicted probability for whether each label was the true label, presented both numerically and visually. In the study, we tested the effect of AI recommendations in a within-subject comparison and the effect of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. In this task, participants tended to have higher domain knowledge for animals than for plants, based on a self-reported measure of domain knowledge. Participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently, as an expert versus a second opinion, depending on their level of domain knowledge. These results suggest that, if presented appropriately, uncertainty information can potentially decrease the overconfidence induced by using AI recommendations.

Keywords: overconfidence; artificial intelligence; uncertainty; human-AI teams; risk communication

Acknowledgments: We thank Cihan Dagli, Krista Lentine, Mark Schnitzler, and Henry Randall for their insights on the design of AI decision support systems.

Disclosure statement: The authors report that there are no competing interests to declare.

Funding: This work was supported by National Science Foundation Award #2026324.
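The uncertainty information described in the abstract is the model's predicted probability that each candidate label is the true label. As an illustrative sketch only (the labels and scores below are hypothetical, not from the study's model), such probabilities are typically obtained by applying a softmax to the classifier's raw output scores and can then be shown numerically to the user:

```python
import math

def softmax(logits):
    """Convert raw classifier scores (logits) into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for one image over three candidate labels
labels = ["monarch butterfly", "viceroy butterfly", "painted lady"]
logits = [2.1, 1.3, -0.4]

# Numeric presentation of per-label uncertainty, as in the experiment's format
for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.0%}")
```

A probability near 100% for the top label signals a recommendation the user might accept outright, while a flatter distribution invites the user to apply their own domain knowledge, which is the calibration behavior the study examines.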
Journal of Risk Research
CiteScore: 12.20
Self-citation rate: 5.90%
Articles per year: 44
Journal description: The Journal of Risk Research is an international journal that publishes peer-reviewed theoretical and empirical research articles within the risk field from the areas of social, physical and health sciences and engineering, as well as articles related to decision making, regulation and policy issues in all disciplines. Articles are published in English. The main aims of the Journal of Risk Research are to stimulate intellectual debate, to promote better risk management practices and to contribute to the development of risk management methodologies. The Journal of Risk Research is the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan.