Framing responsibility: Human and AI agent effects on apology effectiveness in service failures

IF 8.9 · CAS Region 1 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL
Computers in Human Behavior · Volume 179, Article 108931 · Pub Date: 2026-06-01 · Epub Date: 2026-02-02 · DOI: 10.1016/j.chb.2026.108931
Jihyun Soh, Eunice Kim
Citations: 0

Abstract

As artificial intelligence (AI) systems become increasingly prevalent in service interactions, understanding how people assign responsibility and respond to apologies from AI versus human agents is critical for designing effective communication strategies. This research examines how the type of service agent (human vs. AI), the nature of a crisis (value-based vs. performance-based), and attribution strategy (internal vs. external) jointly shape individuals’ perceptions and evaluations of crisis responses. Across two experimental studies, we show that people interpret the moral and functional accountability of agents differently depending on the type of failure and the perceived capacity of the agent. In Study 1, value-based crises elicited stronger negative reactions when a human agent was involved, whereas AI agents were evaluated more harshly in performance-based failures. Study 2 introduces attribution strategy as a moderator and reveals that the effectiveness of an apology hinges on the congruence between agent type, crisis type, and attribution framing. Internal attributions were more effective for human agents in value-related crises and for chatbot agents in performance-related ones, while external attributions were more acceptable in contexts where the agent was not perceived to bear moral or functional responsibility. These findings apply attribution theory to the context of AI-mediated service crises by highlighting agent–crisis–attribution fit as a key determinant of apology effectiveness, with implications for apology design, organizational accountability, and the future of human-machine communication in digital service environments.
Journal metrics: CiteScore 19.10 · Self-citation rate 4.00% · Annual publications 381 · Review time 40 days
Journal description: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical work, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. Its focus is on human interaction with computers, treating the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.