{"title":"Towards a Conversational Interface for Authoring Intelligent Virtual Characters","authors":"Xinyi Wang, Samuel S. Sohn, Mubbasir Kapadia","doi":"10.1145/3308532.3329431","DOIUrl":"https://doi.org/10.1145/3308532.3329431","url":null,"abstract":"The collaboration between creatives and domain architects is crucial for bringing virtual characters to life. Domain architects are technical experts who are tasked with formally designing intelligent virtual characters' domain knowledge, which is a symbolic representation of knowledge that the character uses to reason over its interactions with other agents. In the context of this work, domain knowledge encompasses the mental modeling of the character. Although the creation of interactive narratives requires substantial engineering expertise, it is also necessary to pick the brains of writers, artists, and animators alike to give the characters a boost of peculiarities. This intrinsically collaborative and interdisciplinary process brings about the challenge of bridging different mindsets and workflows in an efficient and effective way. The conventional authoring process for virtual characters is heavily driven by engineering needs (shown in Figure 1a). This process burdens creative authors with inconsistent and cumbersome tasks, leaving little room for imagination and improvisation. As the intelligent system goes through updates, creatives are forced to adjust to new tools and take on new tasks in order to satisfy demands for creative input. Inconsistency and the lack of formality result in ineffective communication, repetitive tasks, underused data, and consequently, content of compromised quality","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121125599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating Visual Acuity for Autonomous Agent: A Data-Driven Approach","authors":"Nicholas Hoyte, Curtis L Gittens, M. Katchabaw","doi":"10.1145/3308532.3329428","DOIUrl":"https://doi.org/10.1145/3308532.3329428","url":null,"abstract":"The system that links intelligent agents to their world is their synthetic senses, allowing them to perceive and interact with the world around them. How such a system is modelled is important since an agent uses the data generated by its synthetic senses to make decisions and change behaviours. This paper will discuss a data-driven synthetic sight model for an autonomous agent that incorporates the concepts of human peripheral vision and visual acuity. We have developed and implemented a synthetic sight model that facilitates a good simulation of these physiological aspects of human sight.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116589047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Belief-based Agent Explanations to Encourage Behaviour Change","authors":"Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, S. Mascarenhas","doi":"10.1145/3308532.3329444","DOIUrl":"https://doi.org/10.1145/3308532.3329444","url":null,"abstract":"Explainable? virtual agents provide insight into the agent's decision-making process, which aims to improve the user's acceptance of the agent's actions or recommendations. However, explainable agents commonly rely on their own knowledge and goals in providing explanations, rather than the beliefs, plans or goals of the user. Little is known about the user perception of such tailored explanations and their impact on their behaviour change. In this paper, we explore the role of belief-based explanation by proposing a user-aware explainable agent by embedding the cognitive agent architecture with a user model and explanation engine to provide a tailored explanation. To make a clear conclusion on the role of explanation in behaviour change intentions, we investigated whether the level of behaviour change intentions is due to building agent-user rapport through the use of empathic language or due to trusting the agent's understanding through providing explanation. Hence, we designed two versions of a virtual advisor agent, empathic and neutral, to reduce study stress among university students and measured students' rapport levels and intentions to change their behaviour. Our results showed that the agent could build a trusted relationship with the user with the help of the explanation regardless of the level of rapport. The results, further, showed that nearly all the recommendations provided by the agent highly significantly increased the intention of the user to change their behavior.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131202146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gesture Class Prediction by Recurrent Neural Network and Attention Mechanism","authors":"Fajrian Yunus, C. Clavel, C. Pelachaud","doi":"10.1145/3308532.3329458","DOIUrl":"https://doi.org/10.1145/3308532.3329458","url":null,"abstract":"Our objective is to develop a machine-learning model that allows a virtual agent to automatically perform appropriate communicative gestures. Our first step is to compute when a gesture should be performed. We express this as classification problem. We initially split the data into NoGesture class and HasGesture class. We develop a model based on recurrent neural network with attention mechanism to compute the class based on the speech prosody. We apply the model on a dialog corpus segmented into different gesture classes and gesture phases. We treat the prosody as the input sequence and the gesture classes as the output sequence.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132641712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling Therapeutic Alliance using a User-aware Explainable Embodied Conversational Agent to Promote Treatment Adherence","authors":"Amal Abdulrahman, Deborah Richards","doi":"10.1145/3308532.3329413","DOIUrl":"https://doi.org/10.1145/3308532.3329413","url":null,"abstract":"Non-adherence to a treatment plan recommended by the therapist is a key cause of the increasing rate of chronic medical conditions globally. The therapist-patient therapeutic alliance is regarded as a successful intervention and a good predictor of treatment adherence. Similar to the human scenario, embodied conversational agents (ECAs) showed evidence of their ability to build an agent-patient therapeutic alliance, which motivates the effort to advance ECAs as a potential solution to improve treatment adherence and consequently the health outcome. Building therapeutic alliance implies the need for a positive environment where the ECA and the patient can share their knowledge and discuss their goals, preferences and tasks towards building a shared plan, which is commonly done using explanations. However, explainable agents commonly rely on their own knowledge and goals in providing explanations, rather than the beliefs, plans or goals of the user. It is not clear whether such explanations, in individual-specific contexts such as personal health assistance, are perceived by the user as relevant in decision-making towards their own behavior change. Therefore, in this research, we are developing a user-aware explainable ECA by embedding the cognitive agent architecture with a user model, explanation engine and modified planner to implement the concept of SharedPlans. The developed agent will be deployed and evaluated with real patients and the therapeutic alliance will be measured using standard measurements.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117057049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of Directivity on the Perception of Embodied Conversational Agents' Speech","authors":"J. Wendt, B. Weyers, J. Stienen, A. Bönsch, M. Vorländer, T. Kuhlen","doi":"10.1145/3308532.3329434","DOIUrl":"https://doi.org/10.1145/3308532.3329434","url":null,"abstract":"Embodied conversational agents become more and more important in various virtual reality applications, e.g., as peers, trainers or therapists. Besides their appearance and behavior, appropriate speech is required for them to be perceived as human-like and realistic. Additionally to the used voice signal, also its auralization in the immersive virtual environment has to be believable. Therefore, we investigated the effect of adding directivity to the speech sound source. Directivity simulates the orientation dependent auralization with regard to the agent's head orientation. We performed a one-factorial user study with two levels (n=35) to investigate the effect directivity has on the perceived social presence and realism of the agent's voice. Our results do not indicate any significant effects regarding directivity on both variables covered. We account this partly to an overall too low realism of the virtual agent, a not overly social utilized scenario and generally high variance of the examined measures. These results are critically discussed and potential further research questions and study designs are identified.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115880774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How do Leaders Perceive Stress and Followership from Nonverbal Behaviors Displayed by Virtual Followers?","authors":"Guillaume Demary, Jean-Claude Martin, S. Dubourdieu, S. Travers, Virginie Demulier","doi":"10.1145/3308532.3329468","DOIUrl":"https://doi.org/10.1145/3308532.3329468","url":null,"abstract":"Managing a medical team in emergency situations requires not only technical but also non-technical skills. Leaders must train to manage different types of subordinates, and how these subordinates will respond to orders and stressful events. Before designing virtual training environments for these leaders, it is necessary to understand how leaders perceive the nonverbal behaviors of virtual characters playing the role of subordinates. In this article, we describe a study we conducted to explore how leaders categorize virtual subordinates from the non-verbal expressions they display (i.e., facial expressions, torso orientation, gaze direction). We analyze how these multimodal behaviors impact the perception of follower style (proactive vs. passive, insubordination), interpersonal attitudes (dominance vs. submission) and stress. Our results suggest that leaders categorize virtual subordinates via nonverbal behaviors that are also perceived as signs of stress and interpersonal attitudes.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121792562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AIMER","authors":"A. Delamarre, Cédric Buche, C. Lisetti","doi":"10.1145/3308532.3329419","DOIUrl":"https://doi.org/10.1145/3308532.3329419","url":null,"abstract":"Elementary school classrooms are emotionally stressful environments, for both students and teachers. Successful teachers use strategies that regulate students' emotions and behaviors while also controlling their own emotions (stress, nervousness). To prepare teachers for the challenges of teaching, teacher training should include emotional and behavioral management strategies. Virtual Training Environments (VTEs) are effective at providing experiences and increasing learning in many domains. Creating VTEs for teachers can improve student learning and teacher retention. We introduce our current research aimed at integrating emotionally-intelligent virtual students within a 3D classroom training system. In our simulation, virtual students' emotional states will be determined from an appraisal process of actions taken by the teacher trainee in the virtual classroom. Virtual students will then display the appropriate non-verbal behaviors and react to the teacher accordingly. We present the first steps required to implement our proposed architecture which are based on appraisal theory of emotions and emotion regulation theory.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122079253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Animating Virtual Signers: The Issue of Gestural Anonymization","authors":"Félix Bigand, E. Prigent, Annelies Braffort","doi":"10.1145/3308532.3329410","DOIUrl":"https://doi.org/10.1145/3308532.3329410","url":null,"abstract":"This paper presents an ongoing PhD research project on visual perception and motion analysis applied to virtual signers (virtual agents used for Sign Language interaction). Virtual signers (or signing avatars) play an important role in the accesibility of information in sign languages. They have been developed notably for their capability to anonymize shape and ap-pearance of the content producer. While motion capture provides human-like, realistic and comprehensible signing animations, it also arises the question of anonymity. Human body movements contain important information about a person's identity, gender or emotional state. In the present work, we want to address the problem of gestural identity in the context of animated agents in French Sign Language. On the one hand, the ability to identify a person from signing motion is assessed through psychophysical experiments, using point-light displays. On the other hand, a computational framework is developed in order to investigate which features are critical for person identification and to control them over the virtual agent.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128526822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology","authors":"Pierre-Yves Oudeyer","doi":"10.1145/3308532.3337710","DOIUrl":"https://doi.org/10.1145/3308532.3337710","url":null,"abstract":"Current approaches to AI and machine learning are still fundamentally limited in comparison with autonomous learning capabilities of children. What is remarkable is not that some children become world champions in certain games or specialties: it is rather their autonomy, flexibility and efficiency at learning many everyday skills under strongly limited resources of time, computation and energy. And they do not need the intervention of an engineer for each new task (e.g. they do not need someone to provide a new task specific reward function). XX I will present a research program that has focused on computational modeling of child development and learning mechanisms in the last decade. I will discuss several developmental forces that guide exploration in large real world spaces, starting from the perspective of how algorithmic models can help us understand better how they work in humans, and in return how this opens new approaches to autonomous machine learning. XX In particular, I will discuss models of curiosity-driven autonomous learning, enabling machines to sample and explore their own goals and their own learning strategies, self-organizing a learning curriculum without any external reward or supervision. XX I will show how this has helped scientists understand better aspects of human development such as the emergence of developmental transitions between object manipulation, tool use and speech. I will also show how the use of real robotic platforms for evaluating these models has led to highly efficient unsupervised learning methods, enabling robots to discover and learn multiple skills in high-dimensions in a handful of hours. I will discuss how these techniques are now being integrated with modern deep learning methods. XX Finally, I will show how these models and techniques can be successfully applied in the domain of educational technologies, enabling to personalize sequences of exercises for human learners, while maximizing both learning efficiency and intrinsic motivation. I will illustrate this with a large-scale experiment recently performed in primary schools, enabling children of all levels to improve their skills and motivation in learning aspects of mathematics.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122481808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}