{"title":"Evaluating Temporal Predictive Features for Virtual Patients Feedbacks","authors":"B. Penteado, M. Ochs, R. Bertrand, P. Blache","doi":"10.1145/3308532.3329426","DOIUrl":"https://doi.org/10.1145/3308532.3329426","url":null,"abstract":"In the intelligent virtual agent domain, several machine learning models have been proposed to automatically determine the feedbacks of virtual agents during an interaction, using human-human interaction datasets as training corpora and most commonly based on verbal and prosodic features citeMorency2010, Truong2010a. These approaches suppose an accurate system to automatically recognize speech and prosody. That makes the overall model's performance dependent on the individual performances of speech and prosody recognizers. As a consequence, one challenge remains to identify features that could be easily and accurately recognized during a human-machine interaction for predicting virtual agents' feedbacks in real time.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122294821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bringing Video Game Characters into the Real World on a Holographic Light Field Display","authors":"J. Arroyo-Palacios, Mahdi Azmandian, Steven Osman","doi":"10.1145/3308532.3329423","DOIUrl":"https://doi.org/10.1145/3308532.3329423","url":null,"abstract":"This paper discusses the design and technical choices of a proof of concept migrating video game character that can move from a game environment to a holographic environment rendered on a novel holographic light field display. We pair these two environments with interactions that are consistent to each, using a game controller for interaction in the game environment and voice, gesture and face tracking in the holographic environment. Finally, we carried out a pilot study to assess the level of social presence, consistent migration and coherent experience in our proposed system.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127850422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expressive Virtual Human: Impact of expressive wrinkles and pupillary size on emotion recognition","authors":"Anne-Sophie Milcent, Erik Geslin, Abdelmajid Kadri, S. Richir","doi":"10.1145/3308532.3329446","DOIUrl":"https://doi.org/10.1145/3308532.3329446","url":null,"abstract":"Improving the expressiveness of virtual humans is essential for qualitative interactions and development of an emotional bond. It is certainly indicated for all applications using the user's cognitive processes, such as applications dedicated to training or health. Our study aims to contribute to the design of an expressive virtual human, by identifying and adapting visual factors promoting transcription of emotions. In this paper, we investigate the effect of expressive wrinkles and variation of pupil size. We propose to compare the recognition of basic emotions on a real human and on an expressive virtual human. The virtual human was subject to two different factors: expressive wrinkles and/or pupil size. Our results indicate that emotion recognition rates on the virtual agent are high. Moreover, expressive wrinkles affect emotion recognition. The effect of pupillary size is less significant. However, both are recommended to design an expressive virtual human.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127745792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stagecraft for Scientists: Exploring Novel Interaction Formats for Virtual Co-Presenter Agents","authors":"Everlyne Kimani, Ameneh Shamekhi, Prasanth Murali, Dhaval Parmar, T. Bickmore","doi":"10.1145/3308532.3329437","DOIUrl":"https://doi.org/10.1145/3308532.3329437","url":null,"abstract":"Our research explores the development of new interaction formats for oral presentations that leverage a life-sized virtual agent that co-delivers a scientific talk with a human presenter. We developed a taxonomy of 36 novel interaction formats as well as 37 roles the agent can take on in co-presentations. We evaluated the impact of these formats and roles by selecting 10 from the taxonomy and recording brief presentations on the same topic using the different formats. Judges ranked dynamic agent roles higher on engagement and rated non-standard interaction formats no lower on appropriateness, compared to standard turn-taking co-presentations.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123919234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing a Mobile Social and Vocational Reintegration Assistant for Burn-out Outpatient Treatment","authors":"Patrick Gebhard, T. Schneeberger, Michael Dietz, E. André, N. Bajwa","doi":"10.1145/3308532.3329460","DOIUrl":"https://doi.org/10.1145/3308532.3329460","url":null,"abstract":"Using Social Agents as health-care assistants or trainers is one focus area of IVA research. This paper presents a concept of our mobile Social Agent EmmA in the role of a vocational reintegration assistant for burn-out outpatient treatment. We follow a typical par- ticipatory design approach including experts and patients in order to address requirements from both sides. Since the success of such treatments is related to a patients emotion regulation capabilities, we employ a real-time social signal interpretation together with a computational simulation of emotion regulation that influences the agent's social behavior as well as the situational selection of verbal treatment strategies. Overall, our interdisciplinary approach sketches a novel integrative concept for Social Agents as assistants for burn-out patients.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121357966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Do you trust me?\": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design","authors":"Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, E. André","doi":"10.1145/3308532.3329441","DOIUrl":"https://doi.org/10.1145/3308532.3329441","url":null,"abstract":"While the research area of artificial intelligence benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development led to an on-going resurgence of the research area of explainable artificial intelligence (XAI) which aims to reduce the opaqueness of those black-box-models. However, much of the current XAI-Research is focused on machine learning practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents within the field of XAI on the perceived trustworthiness of autonomous intelligent systems. To assess the practicality of this concept, we conducted a user study based on a simple speech recognition task. As a result of this experiment, we found significant evidence suggesting that the integration of virtual agents into XAI interaction design leads to an increase of trust in the autonomous intelligent system.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"7 3-4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128608585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing Common Errors Students Make When Negotiating","authors":"Emmanuel Johnson, Sarah Roediger, Gale M. Lucas, J. Gratch","doi":"10.1145/3308532.3329470","DOIUrl":"https://doi.org/10.1145/3308532.3329470","url":null,"abstract":"Research has shown that virtual agents can be effective tools for teaching negotiation. Virtual agents provide an opportuni-ty for students to practice their negotiation skills which leads to better outcomes. However, these negotiation training agents often lack the ability to understand the errors students make when negotiating, thus limiting their effectiveness as training tools. In this article, we argue that automated opponent-modeling techniques serve as effective methods for diagnos-ing important negotiation mistakes. To demonstrate this, we analyze a large number of participant traces generated while negotiating with a set of automated opponents. We show that negotiators' performance is closely tied to their understanding of an opponent's preferences. We further show that opponent modeling techniques can diagnose specific errors includ-ing: failure to elicit diagnostic information from an opponent, failure to utilize the information that was elicited, and failure to understand the transparency of an opponent. These results show that opponent modeling techniques can be effective methods for diagnosing and potentially correcting crucial ne-gotiation errors.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128468966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Assessment on Teaching Skills in a Virtual Rehearsal Environment","authors":"Masato Fukuda, Hung-Hsuan Huang, K. Kuwabara, T. Nishida","doi":"10.1145/3308532.3329449","DOIUrl":"https://doi.org/10.1145/3308532.3329449","url":null,"abstract":"In the training programs for student teachers, the opportunity to practice teaching skills is often limited due to the lack of resources in preparing a rehearsal environment. We are developing a virtual rehearsal environment for teaching practicing with multiple virtual students. In order to provide feedbacks to the student teachers and allow them to improve their skills, automatic assessment on their performance is required. However, it is hard to assess on teaching because the assessment is subjective and often depends on the tacit knowledge of experienced teacher trainers. In this work, we proposed an automatic assessment model based on human assessment done by experienced high school teachers.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127278826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effects of Anthropomorphism and Non-verbal Social Behaviour in Virtual Assistants","authors":"Dimosthenis Kontogiorgos, André Pereira, Olle Andersson, Marco Koivisto, Elena Gonzalez Rabal, Ville Vartiainen, Joakim Gustafson","doi":"10.1145/3308532.3329466","DOIUrl":"https://doi.org/10.1145/3308532.3329466","url":null,"abstract":"The adoption of virtual assistants is growing at a rapid pace. However, these assistants are not optimised to simulate key social aspects of human conversational environments. Humans are intellectually biased toward social activity when facing anthropomorphic agents or when presented with subtle social cues. In this paper, we test whether humans respond the same way to assistants in guided tasks, when in different forms of embodiment and social behaviour. In a within-subject study (N=30), we asked subjects to engage in dialogue with a smart speaker and a social robot. We observed shifting of interactive behaviour, as shown in behavioural and subjective measures. Our findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with nonverbal cues. We found a trade-off between task performance and perceived sociability when controlling for anthropomorphism and social behaviour.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121699849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Job Interviewing Practice for High-Anxiety Populations","authors":"Arno Hartholt, S. Mozgai, Albert A. Rizzo","doi":"10.1145/3308532.3329417","DOIUrl":"https://doi.org/10.1145/3308532.3329417","url":null,"abstract":"We present a versatile system for training job interviewing skills that focuses specifically on segments of the population facing increased challenges during the job application process. In particular, we target those with Autism Spectrum Disorder (ADS), veterans transitioning to civilian life, and former convicts integrating back into society. The system itself follows the SAIBA framework and contains several interviewer characters, who each represent a different type of vocational field, (e.g. service industry, retail, office, etc.) Each interviewer can be set to one of three conversational modes, which not only affects what they say and how they say it, but also their supporting body language. This approach offers varying difficulties, allowing users to start practicing with interviewers who are more encouraging and accommodating before moving on to personalities that are more direct and indifferent. Finally, the user can place the interviewers in different environmental settings (e.g. conference room, restaurant, executive office, etc.), allowing for many different combinations in which to practice.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122120913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}