{"title":"When chatbots fail: exploring user coping following a chatbots-induced service failure","authors":"Ruby Wenjiao Zhang, Xiaoning Liang, Szu-Hsin Wu","doi":"10.1108/itp-08-2023-0745","DOIUrl":null,"url":null,"abstract":"PurposeWhile the proliferation of chatbots allows companies to connect with their customers in a cost- and time-efficient manner, it is not deniable that they quite often fail expectations and may even pose negative impacts on user experience. The purpose of the study is to empirically explore the negative user experience with chatbots and understand how users respond to service failure caused by chatbots.Design/methodology/approachThis study adopts a qualitative research method and conducts thematic analysis of 23 interview transcripts.FindingsIt identifies common areas where chatbots fail user expectations and cause service failure. These include their inability to comprehend and provide information, over-enquiry of personal or sensitive information, fake humanity, poor integration with human agents, and their inability to solve complicated user queries. Negative emotions such as anger, frustration, betrayal and passive defeat were experienced by participants when they interacted with chatbots. We also reveal four coping strategies users employ following a chatbots-induced failure: expressive support seeking, active coping, acceptance and withdrawal.Originality/valueOur study extends our current understanding of human-chatbot interactions and provides significant managerial implications. It highlights the importance for organizations to re-consider the role of their chatbots in user interactions and balance the use of human and chatbots in the service context, particularly in customer service interactions that involve resolving complex issues or handling non-routinized tasks.","PeriodicalId":504906,"journal":{"name":"Information Technology & People","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Technology & People","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/itp-08-2023-0745","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Purpose
While the proliferation of chatbots allows companies to connect with their customers in a cost- and time-efficient manner, it is undeniable that chatbots often fail to meet expectations and may even have a negative impact on user experience. The purpose of this study is to empirically explore negative user experiences with chatbots and to understand how users respond to service failures caused by chatbots.

Design/methodology/approach
This study adopts a qualitative research method and conducts a thematic analysis of 23 interview transcripts.

Findings
The study identifies common areas where chatbots fail user expectations and cause service failure. These include their inability to comprehend and provide information, over-enquiry of personal or sensitive information, fake humanity, poor integration with human agents, and their inability to solve complicated user queries. Participants experienced negative emotions such as anger, frustration, betrayal and passive defeat when they interacted with chatbots. We also reveal four coping strategies users employ following a chatbot-induced failure: expressive support seeking, active coping, acceptance and withdrawal.

Originality/value
Our study extends the current understanding of human-chatbot interactions and provides significant managerial implications. It highlights the importance for organizations to reconsider the role of their chatbots in user interactions and to balance the use of human agents and chatbots in the service context, particularly in customer service interactions that involve resolving complex issues or handling non-routinized tasks.