Trust through words: The systemize-empathize-effect of language in task-oriented conversational agents

Impact Factor: 9.0 · CAS Zone 1 (Psychology) · JCR Q1 (Psychology, Experimental)
Sabine Brunswicker, Yifan Zhang, Christopher Rashidian, Daniel W. Linna Jr.
DOI: 10.1016/j.chb.2024.108516
Journal: Computers in Human Behavior, Volume 165, Article 108516
Published: 2024-12-02
URL: https://www.sciencedirect.com/science/article/pii/S0747563224003844
Citations: 0

Abstract

Anthropomorphic design has received increasing interest in research on conversational agents (CAs) and artificial intelligence (AI). Research suggests that the design of the agents' language impacts trust and cognitive load by making the agent more "human-like". This research seeks to understand the impacts and limits of two dimensions of language-focused anthropomorphism — the agent's ability to empathize and signal the effort to engage with the users' feelings through language structure, and the agent's effort to systemize and take agency to drive the conversation using logic. We advance existing Theories of Mind (ToMs) with linguistic empathy theory to explain how language structure and logic used during the conversation impact two dimensions of system trust and cognitive load through systemizing and empathizing. We conducted a behavioral online experiment involving 277 residents who interacted with one of three online systems, varying in their interfaces' Systemizing–Empathizing capability: a menu-based interface (MUI) (no Systemizing Ability), a non-empathetic chatbot, and an empathetic chatbot (both with Systemizing Ability and Empathizing Ability). Half of the participants underwent an emotional (anger) induction to examine the moderating effects of anger. Our results revealed that systemizing, exhibited by both chatbots, lowers cognitive effort. The ability to empathize through language increased perceived helpfulness. While the empathetic chatbot was generally perceived as more trustworthy, this effect was reversed when users experienced anger: there is an uncanny valley effect, where empathizing through words has its limits. These findings advance research on anthropomorphism design and trust in CAs.
Journal metrics: CiteScore 19.10 · Self-citation rate 4.00% · Articles per year: 381 · Review time: 40 days
About the journal: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.