Hope, tolerance and empathy: employees' emotions when using an AI-enabled chatbot in a digitalised workplace

Lorentsa Gkinko, Amany R. Elbanna
{"title":"Hope, tolerance and empathy: employees' emotions when using an AI-enabled chatbot in a digitalised workplace","authors":"Lorentsa Gkinko, Amany R. Elbanna","doi":"10.1108/itp-04-2021-0328","DOIUrl":null,"url":null,"abstract":"PurposeInformation Systems research on emotions in relation to using technology largely holds essentialist assumptions about emotions, focuses on negative emotions and treats technology as a token or as a black box, which hinders an in-depth understanding of distinctions in the emotional experience of using artificial intelligence (AI) technology in context. This research focuses on understanding employees' emotional experiences of using an AI chatbot as a specific type of AI system that learns from how it is used and is conversational, displaying a social presence to users. The research questions how and why employees experience emotions when using an AI chatbot, and how these emotions impact its use.Design/methodology/approachAn interpretive case study approach and an inductive analysis were adopted for this study. Data were collected through interviews, documents review and observation of use.FindingsThe study found that employee appraisals of chatbots were influenced by the form and functional design of the AI chatbot technology and its organisational and social context, resulting in a wider repertoire of appraisals and multiple emotions. In addition to positive and negative emotions, users experienced connection emotions. The findings show that the existence of multiple emotions can encourage continued use of an AI chatbot.Originality/valueThis research extends information systems literature on emotions by focusing on the lived experiences of employees in their actual use of an AI chatbot, while considering its characteristics and its organisational and social context. The findings inform the emerging literature on AI.","PeriodicalId":13533,"journal":{"name":"Inf. Technol. People","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Technol. People","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/itp-04-2021-0328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 9

Abstract

Purpose
Information Systems research on emotions in relation to using technology largely holds essentialist assumptions about emotions, focuses on negative emotions and treats technology as a token or as a black box, which hinders an in-depth understanding of distinctions in the emotional experience of using artificial intelligence (AI) technology in context. This research focuses on understanding employees' emotional experiences of using an AI chatbot as a specific type of AI system that learns from how it is used and is conversational, displaying a social presence to users. The research asks how and why employees experience emotions when using an AI chatbot, and how these emotions impact its use.

Design/methodology/approach
An interpretive case study approach and an inductive analysis were adopted for this study. Data were collected through interviews, document review and observation of use.

Findings
The study found that employee appraisals of chatbots were influenced by the form and functional design of the AI chatbot technology and its organisational and social context, resulting in a wider repertoire of appraisals and multiple emotions. In addition to positive and negative emotions, users experienced connection emotions. The findings show that the existence of multiple emotions can encourage continued use of an AI chatbot.

Originality/value
This research extends the information systems literature on emotions by focusing on the lived experiences of employees in their actual use of an AI chatbot, while considering its characteristics and its organisational and social context. The findings inform the emerging literature on AI.