The Impact of AI Transparency and Reliability on Human-AI Collaborative Decision-Making

Xujinfeng Wang, Yicheng Yang, Da Tao, Tingru Zhang
{"title":"The Impact of AI Transparency and Reliability on Human-AI Collaborative Decision-Making","authors":"Xujinfeng Wang, Yicheng Yang, Da Tao, Tingru Zhang","doi":"10.54941/ahfe1004203","DOIUrl":null,"url":null,"abstract":"Human-AI collaborative decision-making has become a prevalent interaction paradigm, but the lack of transparency in AI algorithms presents challenges for humans to understand the decision-making process. Such lack of comprehension can lead to issues of over-reliance or under-reliance on AI recommendations. In this study, we focused on a human-AI collaborative income predicting task and investigated the influence of AI transparency and reliability on task performance. The results revealed that when AI reliability was high (75% and 90%), transparency had no significant effects on human decision-making. However, at a lower level of reliability (60%), higher transparency levels led to increased compliance with AI suggestions, thereby demonstrating a persuasive effect. Further analysis indicated that compliance rates only improved when AI made correct decisions, rather than when AI made incorrect ones. However, transparency did not significantly impact humans' ability to correctly reject erroneous recommendations from AI, suggesting that increasing transparency alone did not enhance humans’ error detecting ability. In conclusion, when the reliability of AI is low, heightening transparency can promote appropriate dependence on AI without elevating the risk of over-reliance. 
Nevertheless, further research is necessary to explore effective strategies that can assist humans in identifying AI errors effectively.","PeriodicalId":470195,"journal":{"name":"AHFE international","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AHFE international","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1004203","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Human-AI collaborative decision-making has become a prevalent interaction paradigm, but the lack of transparency in AI algorithms makes it difficult for humans to understand the decision-making process. This lack of comprehension can lead to over-reliance or under-reliance on AI recommendations. In this study, we focused on a human-AI collaborative income-prediction task and investigated the influence of AI transparency and reliability on task performance. The results revealed that when AI reliability was high (75% and 90%), transparency had no significant effect on human decision-making. However, at a lower level of reliability (60%), higher transparency led to increased compliance with AI suggestions, demonstrating a persuasive effect. Further analysis indicated that compliance rates improved only when the AI made correct decisions, not when it made incorrect ones. However, transparency did not significantly affect humans' ability to correctly reject erroneous AI recommendations, suggesting that increasing transparency alone does not enhance humans' error-detection ability. In conclusion, when the reliability of AI is low, heightening transparency can promote appropriate reliance on AI without elevating the risk of over-reliance. Nevertheless, further research is needed to explore effective strategies for helping humans identify AI errors.
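The abstract's distinction between compliance on AI-correct trials (appropriate reliance), compliance on AI-incorrect trials (over-reliance), and correct rejection of bad advice (error detection) can be made concrete with a small sketch. This is not the authors' analysis code; the `Trial` record, function names, and the toy data for a hypothetical 60%-reliable AI are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool      # was the AI's recommendation right on this trial?
    human_followed: bool  # did the participant adopt the recommendation?

def reliance_metrics(trials):
    """Split the compliance rate by whether the AI was right or wrong."""
    correct = [t for t in trials if t.ai_correct]
    incorrect = [t for t in trials if not t.ai_correct]
    # Compliance when the AI is right: the desirable, "appropriate" reliance.
    compliance_when_correct = sum(t.human_followed for t in correct) / len(correct)
    # Compliance when the AI is wrong: over-reliance.
    over_reliance = sum(t.human_followed for t in incorrect) / len(incorrect)
    # Correctly rejecting a wrong recommendation: error detection.
    error_detection = 1 - over_reliance
    return compliance_when_correct, over_reliance, error_detection

# Hypothetical data for a 60%-reliable AI: 6 of 10 recommendations correct.
trials = (
    [Trial(ai_correct=True, human_followed=True)] * 5
    + [Trial(ai_correct=True, human_followed=False)] * 1
    + [Trial(ai_correct=False, human_followed=True)] * 2
    + [Trial(ai_correct=False, human_followed=False)] * 2
)

cc, over, detect = reliance_metrics(trials)
```

On this toy data the three rates differ (cc ≈ 0.83, over = 0.5, detect = 0.5), which mirrors the paper's point: a higher overall compliance rate only counts as appropriate reliance if the increase is concentrated on AI-correct trials while error detection holds steady.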