When AI Fails, Who Do We Blame? Attributing Responsibility in Human–AI Interactions

Jordan Richard Schoenherr and Robert Thomson

IEEE Transactions on Technology and Society, vol. 5, no. 1, pp. 61–70, March 2024. DOI: 10.1109/TTS.2024.3370095. https://ieeexplore.ieee.org/document/10457538/
Citations: 0

Abstract

While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios describing automation failures and rated perceived responsibility and trustworthiness, and indicated their preferred type of explanation. Participants’ cumulative responsibility ratings for the three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner and that trust in the AI might serve as a proxy for trust in the human software developer. A dissociation between responsibility and trustworthiness suggested that participants used different cues, with the kind of technology and its perceived autonomy affecting their judgments. Finally, we found that the kind of explanation used to understand a situation differed depending on whether the AI succeeded or failed.