Trust in the system and human autonomy in customer service chatbots

Ana Gervazoni, Manuela Quaresma
DOI: 10.3384/ecp203072 · Linköping Electronic Conference Proceedings · Published: 2023-11-28

Abstract

When Artificial Intelligence systems are not explained clearly to users, the lack of clarity can negatively affect their interactions and compromise their perception of a brand. When designing and developing conversational agents that deal with clients, it is crucial to consider that they are a service and should follow human-centered Artificial Intelligence (HCAI) approaches. This study discusses two HCAI frameworks, relates them to trust in the system and human autonomy, and defines how their guidelines could be met in customer service chatbots. A survey was conducted to determine whether users' views of their interactions with chatbots aligned with the recommended guidelines and how this affected their trust and autonomy. The analysis of the responses indicates that human-centered Artificial Intelligence approaches have yet to be prioritized, or even met, in customer service chatbot development. Users reported unpleasant experiences with such services, leading to a decrease in their trust and autonomy.