{"title":"Multimodal Cues of the Sense of Presence and Co-presence in Human-Virtual Agent Interaction","authors":"M. Ochs, Jeremie Bousquet, P. Blache","doi":"10.1145/3308532.3329438","DOIUrl":"https://doi.org/10.1145/3308532.3329438","url":null,"abstract":"A key challenge when studying human-agent interaction is the evaluation of user's experience. In virtual reality, this question is addressed by studying the sense of \"presence'' and\"co-presence'', generally assessed thanks to well-grounded subjective post-experience questionnaires. In this article, we aim at exploring behavioral measures of presence and co-presence by analyzing multimodal cues produced during an interaction both by the user and the virtual agent. In our study, we started from a corpus of human-agent interaction collected in a task-oriented context: a virtual environment aiming at training doctors to break bad news to a patient (played by a virtual agent). Based on this corpus, we have used machine learning algorithms to explore the possibility of predicting user's sense of presence and co-presence. In particular, we have applied and compared two techniques, Random forest and SVM, both showing very good results in predicting the level of presence and co-presence.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An End-to-End Conversational Style Matching Agent","authors":"Rens Hoegen, Deepali Aneja, Daniel J. McDuff, M. Czerwinski","doi":"10.1145/3308532.3329473","DOIUrl":"https://doi.org/10.1145/3308532.3329473","url":null,"abstract":"We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style. Whereas, users with high involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115467471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Input and Output Representations for Speech-Driven Gesture Generation","authors":"Taras Kucherenko, Dai Hasegawa, G. Henter, Naoshi Kaneko, H. Kjellström","doi":"10.1145/3308532.3329472","DOIUrl":"https://doi.org/10.1145/3308532.3329472","url":null,"abstract":"This paper presents a novel framework for automatic speech-driven gesture generation, applicable to human-agent interaction including both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates. Our approach consists of two steps. First, we learn a lower-dimensional representation of human motion using a denoising autoencoder neural network, consisting of a motion encoder MotionE and a motion decoder MotionD. The learned representation preserves the most important aspects of the human pose variation while removing less relevant variation. Second, we train a novel encoder network SpeechE to map from speech to a corresponding motion representation with reduced dimensionality. At test time, the speech encoder and the motion decoder networks are combined: SpeechE predicts motion representations based on a given speech signal and MotionD then decodes these representations to produce motion sequences. We evaluate different representation sizes in order to find the most effective dimensionality for the representation. We also evaluate the effects of using different speech features as input to the model. We find that mel-frequency cepstral coefficients (MFCCs), alone or combined with prosodic features, perform the best. The results of a subsequent user study confirm the benefits of the representation learning.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133074146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}