Artificial empathy in healthcare chatbots: Does it feel authentic?

Lennart Seitz
{"title":"医疗聊天机器人中的人工同理心:感觉真实吗?","authors":"Lennart Seitz","doi":"10.1016/j.chbah.2024.100067","DOIUrl":null,"url":null,"abstract":"<div><p>Implementing empathy to healthcare chatbots is considered promising to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding if artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of <em>empathetic</em> (feeling with), <em>sympathetic</em> (feeling for), and <em>behavioral-empathetic</em> (empathetic helping) vs. <em>non-empathetic</em> responses on perceived warmth, perceived authenticity, and their consequences on trust and using intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth resulting in higher trust and using intentions. As hypothesized, <em>empathetic,</em> and <em>sympathetic</em> responses reduce the chatbot's perceived authenticity suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not equally apply to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100067"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000276/pdfft?md5=0d321010e61e06e55e950fbc8ca81fa2&pid=1-s2.0-S2949882124000276-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial empathy in healthcare chatbots: Does it feel authentic?\",\"authors\":\"Lennart Seitz\",\"doi\":\"10.1016/j.chbah.2024.100067\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Implementing empathy to healthcare chatbots is considered promising to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding if artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of <em>empathetic</em> (feeling with), <em>sympathetic</em> (feeling for), and <em>behavioral-empathetic</em> (empathetic helping) vs. <em>non-empathetic</em> responses on perceived warmth, perceived authenticity, and their consequences on trust and using intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth resulting in higher trust and using intentions. 
As hypothesized, <em>empathetic,</em> and <em>sympathetic</em> responses reduce the chatbot's perceived authenticity suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not equally apply to human-bot interactions. It further introduces the concept of ‘perceived authenticity’ and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.</p></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"2 1\",\"pages\":\"Article 100067\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2949882124000276/pdfft?md5=0d321010e61e06e55e950fbc8ca81fa2&pid=1-s2.0-S2949882124000276-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882124000276\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882124000276","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 0

Abstract

Implementing empathy in healthcare chatbots is considered a promising way to create a sense of human warmth. However, existing research frequently overlooks the multidimensionality of empathy, leading to an insufficient understanding of whether artificial empathy is perceived similarly to interpersonal empathy. This paper argues that implementing experiential expressions of empathy may have unintended negative consequences, as they might feel inauthentic. Instead, providing instrumental support could be more suitable for modeling artificial empathy, as it aligns better with computer-like schemas towards chatbots. Two experimental studies using healthcare chatbots examine the effect of empathetic (feeling with), sympathetic (feeling for), and behavioral-empathetic (empathetic helping) vs. non-empathetic responses on perceived warmth and perceived authenticity, and their consequences for trust and usage intentions. Results reveal that any kind of empathy (vs. no empathy) enhances perceived warmth, resulting in higher trust and usage intentions. As hypothesized, empathetic and sympathetic responses reduce the chatbot's perceived authenticity, suppressing this positive effect in both studies. A third study does not replicate this backfiring effect in human-human interactions. This research thus highlights that empathy does not apply equally to human-bot interactions. It further introduces the concept of 'perceived authenticity' and demonstrates that distinctively human attributes might backfire by feeling inauthentic in interactions with chatbots.
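To make the four experimental conditions concrete, the sketch below shows one way a healthcare chatbot could template them, holding the informational core constant while varying only the empathy framing. The wording, the `RESPONSE_STYLES` mapping, and the `respond` helper are illustrative assumptions, not the paper's actual stimuli.

```python
# Hypothetical sketch of the four response styles compared in the studies.
# Wording, names, and structure are illustrative assumptions, not the
# paper's actual experimental materials.

RESPONSE_STYLES = {
    # Experiential empathy: "feeling with" the user.
    "empathetic": "I can really feel how worried you are about these symptoms.",
    # Sympathy: "feeling for" the user.
    "sympathetic": "I'm so sorry that you are going through this.",
    # Behavioral empathy: empathetic helping through instrumental support.
    "behavioral_empathetic": "Let me take care of the next steps for you right away.",
    # Control condition: informational only, no empathy cue.
    "non_empathetic": "Your symptoms have been recorded.",
}


def respond(style: str) -> str:
    """Combine a style-specific cue with a fixed informational core,
    so only the empathy framing differs between conditions."""
    if style not in RESPONSE_STYLES:
        raise ValueError(f"unknown style: {style!r}")
    core = "Based on your symptoms, I recommend a consultation with a physician."
    return f"{RESPONSE_STYLES[style]} {core}"


if __name__ == "__main__":
    for style in RESPONSE_STYLES:
        print(f"[{style}] {respond(style)}")
```

Keeping the informational core identical across conditions mirrors the experimental logic described in the abstract: any difference in perceived warmth or authenticity can then be attributed to the empathy framing alone.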
