{"title":"Towards Empathic Conversational Interaction","authors":"Micol Spitale, F. Garzotto","doi":"10.1145/3405755.3406146","DOIUrl":"https://doi.org/10.1145/3405755.3406146","url":null,"abstract":"In recent years, \"computational empathy\" has emerged as a new challenging research field. Computational empathy investigates how artificial agents can manifest empathic behaviours towards the user, and how they can elicit empathy during the human-agent interaction. Such \"empathic agents\" have the capacity to place themselves into the emotional position of a user (or another agent), and to behave in ways that take this emotional understanding into account. The paper explores a computational empathy approach in the context of conversational interaction, and presents an empathic conversational framework grounded in empathy theory. The framework provides a conceptual tool for designing and evaluating empathic conversational agents. Overall, our research contributes to a deeper understanding of the role of empathy in conversational interaction.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123707018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pragmatics Research and Non-task Dialog Technology","authors":"E. Gilmartin, Christian Saam","doi":"10.1145/3405755.3406142","DOIUrl":"https://doi.org/10.1145/3405755.3406142","url":null,"abstract":"Interest is growing in dialog systems which engage users in conversations that are not entirely focussed on the immediate performance of clearly defined practical tasks. Interest is also growing in data-driven methods for dialog system design, with increasing focus on sequence-to-sequence deep learning models, inspired by success in the machine translation sphere. In this position paper, we discuss research on casual conversation or social talk and current methods in design of systems, highlighting areas which need reconciliation.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115026856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Situation-Adaptive In-Vehicle Voice Output","authors":"D. Stier, K. Munro, U. Heid, W. Minker","doi":"10.1145/3405755.3406127","DOIUrl":"https://doi.org/10.1145/3405755.3406127","url":null,"abstract":"Human-machine interaction is increasingly speech-based, with a trend away from the earlier command-based style towards natural, intuitive dialogues based on the human model. A prerequisite is the ability of a Spoken Dialogue System to flexibly react according to individual requirements, e.g., by means of adaptive voice output. The necessity to maximize the efficiency of language interaction through alignment at all linguistic levels becomes particularly relevant in dual-task situations. Here, speech represents a secondary task in parallel to a prioritized primary task, such as driving a car. In addition to the individual requirements of a user, the demands of the interaction context need to be considered. For this purpose, it is beneficial to examine the particular characteristics of user language during the performance of a primary task. To this end, we conducted data collection in a driving simulator and investigated user language while driving, with a focus on the syntactic level. Our results show significant differences in language use between two different driving complexity contexts, which should be taken into account in the generation of voice output. Our analyses serve as a basis for future work towards user- and situation-adaptive voice output in dual-task environments.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124861500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalised Chats with Voice Assistants: The User Perspective","authors":"S. Völkel, Penelope Kempf, H. Hussmann","doi":"10.1145/3405755.3406156","DOIUrl":"https://doi.org/10.1145/3405755.3406156","url":null,"abstract":"Recent research suggests that adapting a voice assistant's personality to the user can improve the interaction experience. We present a pragmatic and practical approach to adapting voice assistant personality. We asked users to take the voice assistant's perspective and write their \"ideal\" voice assistant-user dialogue in different scenarios in an automotive context. Our results indicate individual differences in participants' preference for social or purely functional conversations with the voice assistant.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126885793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging the Crowd to Support the Conversation Design Process","authors":"Yoonseo Choi, Hyungyu Shin, T. K. Monserrat, Nyoungwoo Lee, Jeongeon Park, Juho Kim","doi":"10.1145/3405755.3406155","DOIUrl":"https://doi.org/10.1145/3405755.3406155","url":null,"abstract":"Building a chatbot with human-like conversational capabilities is essential for users to feel natural during task completion. Many designers collect human conversation data and apply it to chatbot conversations, aiming for interactions that work like human conversation. To support conversation design, we propose the idea of inviting the crowd into the design process, where crowd workers contribute to improving the designed conversation. To explore this idea, we developed ProtoChat, a prototype system that supports the conversation design process by (1) allowing the crowd to actively suggest new utterances based on designers' pre-written design and (2) visually representing crowdsourced conversation data so that designers can analyze and improve their conversation design. Results of an exploratory study indicated that the crowd is helpful in providing insights and ideas as designers explore the design space.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121066102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Preference and Categories for Error Responses in Conversational User Interfaces","authors":"S. Yuan, Birgit Brüggemeier, Stefan Hillmann, Thilo Michael","doi":"10.1145/3405755.3406126","DOIUrl":"https://doi.org/10.1145/3405755.3406126","url":null,"abstract":"Error messages are frequent in interactions with Conversational User Interfaces (CUI). Smart speakers respond to about every third user request with an error message. Errors can heavily affect user experience (UX) in interaction with CUI. However, there is limited research on how error responses should be formulated. In this paper, we present a system to study how people classify different categories of error messages (acknowledgement of user sentiment, acknowledgement of error, and apology), and evaluate people's preferences for error responses with clear categories. The results indicate that if an error response has only one element (i.e., neutral acknowledgement of error, apology, or sentiment), responses that acknowledge errors neutrally are preferred by participants. Moreover, we find that when interviewed, participants like error messages to include an apology, an explanation of what went wrong, and a suggestion on how to fix the problem in addition to a neutral acknowledgement of an error. Our study makes two main contributions: (1) our results inform the design of error messages, and (2) we present a framework for error response categorization and validation.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116093849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Differences between Heavy Users and Light Users in Difficulties with Voice User Interfaces","authors":"Hyunhoon Jung, Hyeji Kim, Jung-Woo Ha","doi":"10.1145/3405755.3406170","DOIUrl":"https://doi.org/10.1145/3405755.3406170","url":null,"abstract":"Voice user interfaces (VUIs) are growing in popularity. At this stage of VUI adoption, the distinction between heavy users and light users is becoming an emerging challenge. Some studies have investigated how general users interact with VUIs; however, few studies have focused solely on the differences in VUI use between heavy and light users. In this paper, we conducted a user study using our new restaurant reservation VUI, AiCall, to explore what kinds of difficulties these two groups face and how they differ. We found that 1) heavy users could identify more diverse difficulty types than light users; 2) the types of difficulties that affect each group of users are different; and 3) in particular, the repetition of agent utterances was considered the most inconvenient by heavy users. Based on these findings, we discuss VUI design and development considerations to satisfy both groups of users.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"5 24","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113963357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversational User Interfaces on Mobile Devices: Survey","authors":"Razan N. Jaber, Donald Mcmillan","doi":"10.1145/3405755.3406130","DOIUrl":"https://doi.org/10.1145/3405755.3406130","url":null,"abstract":"Conversational User Interfaces (CUI) on mobile devices are the most accessible and widespread examples of voice-based interaction in the wild. This paper presents a survey of mobile conversational user interface research since the commercial deployment of Apple's Siri, the first readily available consumer CUI. We present and discuss Text Entry & Typing, Application Control, Speech Analysis, Conversational Agents, Spoken Output, & Probes as the prevalent themes of research in this area. We also discuss this body of work in relation to the domains of Health & Well-being, Education, Games, and Transportation. We conclude this paper with a discussion on Multi-modal CUIs, Conversational Repair, and the implications for CUIs of greater access to the context of use.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130324948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Case for a Voice-Internet: Voice Before Conversation","authors":"J. Zimmerman","doi":"10.1145/3405755.3406149","DOIUrl":"https://doi.org/10.1145/3405755.3406149","url":null,"abstract":"This position paper makes the case for constructing a voice-Internet: applications that run on voice activated personal agents (VAPAs) and that complement apps people run on their smartphones. I am promoting this position because screen readers will never really work. The content that screen readers read was designed to be seen, not heard. Text content relies on visual semantics for communication and offers readers the ability to skim; neither works with audio. This paper describes two studies that explore how VAPAs both do and do not meet the needs of current screen reader users. It then describes why now might be the perfect time to create a voice-Internet, noting that a technical platform already exists, that natural language processing technology is not yet ready for real conversation, and that enterprises show some willingness to create a new communication channel for their customers.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132196849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Apples and Oranges: A framework for the usability evaluation of voice vs graphical user interfaces: A command-event based method proposal","authors":"Fermin Chavez-Sanchez, Lucila Mercado Colin","doi":"10.1145/3405755.3406169","DOIUrl":"https://doi.org/10.1145/3405755.3406169","url":null,"abstract":"This poster paper presents a proposal for the usability evaluation contrasting a VUI with a GUI. The proposal was pilot tested with six users performing six tasks. The setting was controlled, and sessions were video-recorded for the extraction and analysis of indicators which were cross-validated with subjective user data. Results suggest that the framework is effective for the conditions given and viable for further development.","PeriodicalId":380130,"journal":{"name":"Proceedings of the 2nd Conference on Conversational User Interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116828219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}