EmpathicSDS
Sebastian Zepf, Arijit Gupta, J. Krämer, W. Minker
Proceedings of the 2nd Conference on Conversational User Interfaces, July 2020. DOI: 10.1145/3405755.3406125
Abstract
In human-to-human conversations, showing empathy, and thus understanding for the situation of the other party, is crucial for a natural conversation. Emotional mimicry, i.e. imitating the expressions of the person we are interacting with, is one of the basic mechanisms contributing to empathy. State-of-the-art speech dialogue systems still lack the ability to show empathy, which limits their naturalness. We therefore present EmpathicSDS, a prototype for investigating the potential of lexical and acoustic mimicry to improve empathy in conversational interfaces. The prototype comprises three modes: (1) neutral, where the system's response to a user query is static; (2) lexical mimicry, where the user's wording is reappraised by the system; and (3) lexical and acoustic mimicry, which combines lexical mimicry with matching the system's voice emotion to the user's emotional state. We conducted a user study with 33 participants to evaluate the effect of the mimicry approaches on user perception and to explore the role of user emotions. Our results show that lexical mimicry significantly improves perceived empathy and personalization without affecting efficiency. Acoustic mimicry can further improve naturalness in the positive-emotion condition, while impairing efficiency in the negative-emotion condition.
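
The abstract gives no implementation details, so the following is only a minimal sketch of how the three response modes might be wired together: a single function that receives the user's query, a coarse emotion label (e.g. from an acoustic emotion classifier), and a precomputed static answer, and returns the response text plus a target voice emotion for speech synthesis. All names here (Mode, generate_response, static_answer) are hypothetical illustrations, not the authors' code.

```python
from enum import Enum, auto


class Mode(Enum):
    NEUTRAL = auto()            # static response to the user query
    LEXICAL = auto()            # reuse the user's own wording
    LEXICAL_ACOUSTIC = auto()   # lexical mimicry + emotion-matched voice


def generate_response(query: str, user_emotion: str, mode: Mode,
                      static_answer: str) -> tuple[str, str]:
    """Return (response text, TTS voice emotion) for one user turn.

    `static_answer` is the canned answer the system would give for `query`;
    `user_emotion` is a coarse label such as "positive", "negative" or
    "neutral", assumed to come from an upstream emotion classifier.
    """
    if mode is Mode.NEUTRAL:
        # Same wording and a neutral voice regardless of how the user phrased it.
        return static_answer, "neutral"

    # Lexical mimicry (simple heuristic for illustration): echo part of the
    # user's wording before giving the answer.
    mirrored = f"You said: \"{query.strip().rstrip('?')}\". {static_answer}"

    if mode is Mode.LEXICAL:
        return mirrored, "neutral"

    # Lexical + acoustic mimicry: additionally match the synthesized voice
    # emotion to the user's detected emotional state.
    return mirrored, user_emotion
```

For example, `generate_response("How is the weather tomorrow?", "positive", Mode.LEXICAL_ACOUSTIC, "Tomorrow will be sunny.")` returns the mirrored answer together with the label "positive" for the synthesized voice, while the NEUTRAL mode would return only the static answer with a neutral voice.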