If You Ask Nicely: A Digital Assistant Rebuking Impolite Voice Commands
Michael Bonfert, Maximilian Spliethöver, Roman Arzaroli, Marvin Lange, Martin Hanci, R. Porzel
Proceedings of the 20th ACM International Conference on Multimodal Interaction
Published: 2018-10-02
DOI: 10.1145/3242969.3242995 (https://doi.org/10.1145/3242969.3242995)
Citations: 22
Abstract
Digital home assistants have an increasing influence on our everyday lives. The media now reports how children adopt this curt, imperious language style when talking to real people. In response to this behavior, we considered a digital assistant that rebukes impolite language. We then investigated how adult users react when rebuked by the AI. In a between-group study (N = 20), participants were rejected by our fictional speech assistant "Eliza" when they made impolite requests. As a result, we observed more polite behavior: most test subjects accepted the AI's demand and said "please" significantly more often. However, many participants retrospectively denied Eliza any entitlement to politeness and criticized her attitude or her refusal of service.