Pablo Rivas, Kerstin Holzmayer, Cristian Hernandez, Charles Grippaldi
Title: Excitement and Concerns about Machine Learning-Based Chatbots and Talkbots: A Survey
DOI: 10.1109/ISTAS.2018.8638280
Published in: 2018 IEEE International Symposium on Technology and Society (ISTAS), November 2018
Citations: 7
Abstract
Chatbots and talkbots are intelligent programs that can establish written and oral communication with human beings, usually to help them achieve a specific goal. More and more companies are implementing bots to reduce operational costs. Most bots use machine learning algorithms deployed on companies' websites, cloud services, or distributed mobile systems, so that customers can always speak with 'someone' to inquire about products or services. Most bots are trained on data from interactions among human beings so that they can learn speech patterns and answer questions. In this paper we present the results of an experiment designed to survey people's perception of these bots and how much people trust them. We present respondents with a moral dilemma, ask questions about permissiveness, and assess whether bots are judged and blamed differently from their human counterparts. We reveal such differences in judgement, which suggest that many people hold chatbots to behavioral standards similar to those for human beings; nonetheless, bots receive blame just as humans do.