Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI

IF 9.0 · JCR Q1, Psychology, Experimental · SCI Zone 1 (Psychology)
Journal: Computers in Human Behavior
DOI: 10.1016/j.chb.2024.108352
Published: 2024-06-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0747563224002206
Citations: 0

Abstract


Decision-making is undergoing rapid changes due to the introduction of artificial intelligence (AI), as AI recommender systems can help mitigate human flaws and increase decision accuracy and efficiency. However, AI can also commit errors or suffer from algorithmic bias. Hence, blind trust in technologies carries risks, as users may follow detrimental advice resulting in undesired consequences. Building upon research on algorithm appreciation and trust in AI, the current study investigates whether users who receive AI advice in an uncertain situation overrely on this advice — to their own detriment and that of other parties. In a domain-independent, incentivized, and interactive behavioral experiment, we find that the mere knowledge of advice being generated by an AI causes people to overrely on it, that is, to follow AI advice even when it contradicts available contextual information as well as their own assessment. Frequently, this overreliance leads not only to inefficient outcomes for the advisee, but also to undesired effects regarding third parties. The results call into question how AI is being used in assisted decision making, emphasizing the importance of AI literacy and effective trust calibration for productive deployment of such systems.

Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Annual articles: 381
Review time: 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.