Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents — Latest Publications

Towards a Conversational Interface for Authoring Intelligent Virtual Characters
Xinyi Wang, Samuel S. Sohn, Mubbasir Kapadia
DOI: 10.1145/3308532.3329431 (published 2019-07-01)
Abstract: The collaboration between creatives and domain architects is crucial for bringing virtual characters to life. Domain architects are technical experts tasked with formally designing an intelligent virtual character's domain knowledge: a symbolic representation of the knowledge the character uses to reason over its interactions with other agents. In the context of this work, domain knowledge encompasses the mental modeling of the character. Although the creation of interactive narratives requires substantial engineering expertise, it also draws on the contributions of writers, artists, and animators to give characters their distinctive traits. This intrinsically collaborative and interdisciplinary process raises the challenge of bridging different mindsets and workflows efficiently and effectively. The conventional authoring process for virtual characters is heavily driven by engineering needs (shown in Figure 1a). This process burdens creative authors with inconsistent and cumbersome tasks, leaving little room for imagination and improvisation. As the intelligent system goes through updates, creatives are forced to adjust to new tools and take on new tasks in order to satisfy demands for creative input. Inconsistency and the lack of formality result in ineffective communication, repetitive tasks, underused data, and, consequently, content of compromised quality.
Citations: 4

Simulating Visual Acuity for Autonomous Agent: A Data-Driven Approach
Nicholas Hoyte, Curtis L. Gittens, M. Katchabaw
DOI: 10.1145/3308532.3329428 (published 2019-07-01)
Abstract: Intelligent agents are linked to their world through synthetic senses, which allow them to perceive and interact with their surroundings. How such a system is modelled matters because an agent uses the data generated by its synthetic senses to make decisions and change behaviours. This paper discusses a data-driven synthetic sight model for an autonomous agent that incorporates the concepts of human peripheral vision and visual acuity. We have developed and implemented a synthetic sight model that provides a convincing simulation of these physiological aspects of human sight.
Citations: 0

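The abstract above does not spell out the acuity model itself, but the core idea — acuity falling off with angular distance from the gaze direction — can be sketched in a few lines. The falloff formula and the constants here are generic illustrations (a common eccentricity-based model), not the data-driven model from the paper.

```python
def relative_acuity(eccentricity_deg, e2=2.3):
    """Relative visual acuity as a function of eccentricity (degrees away
    from the gaze direction), using a standard hyperbolic falloff
    acuity(e) = e2 / (e2 + e). e2, the eccentricity at which acuity
    halves, is an illustrative constant, not a value from the paper."""
    return e2 / (e2 + abs(eccentricity_deg))

def can_resolve(target_angular_size_deg, eccentricity_deg,
                foveal_threshold_deg=1 / 60):
    """An agent 'sees' a target if its angular size exceeds the resolvable
    threshold at that eccentricity (foveal threshold ~1 arcminute)."""
    threshold = foveal_threshold_deg / relative_acuity(eccentricity_deg)
    return target_angular_size_deg >= threshold
```

With this sketch, a half-degree target is resolvable when fixated directly, while an arcminute-sized detail at 30° of eccentricity is not — a coarse analogue of peripheral blur.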
Belief-based Agent Explanations to Encourage Behaviour Change
Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, S. Mascarenhas
DOI: 10.1145/3308532.3329444 (published 2019-07-01)
Abstract: Explainable virtual agents provide insight into the agent's decision-making process, which aims to improve the user's acceptance of the agent's actions or recommendations. However, explainable agents commonly rely on their own knowledge and goals in providing explanations, rather than the beliefs, plans, or goals of the user. Little is known about how users perceive such tailored explanations or how the explanations affect behaviour change. In this paper, we explore the role of belief-based explanation by proposing a user-aware explainable agent that embeds a user model and an explanation engine in the cognitive agent architecture to provide tailored explanations. To draw a clear conclusion on the role of explanation in behaviour change intentions, we investigated whether the level of behaviour change intention is due to building agent-user rapport through the use of empathic language, or due to trusting the agent's understanding through the provided explanations. We therefore designed two versions of a virtual advisor agent, empathic and neutral, to reduce study stress among university students, and measured students' rapport levels and intentions to change their behaviour. Our results showed that the agent could build a trusted relationship with the user with the help of the explanation, regardless of the level of rapport. The results further showed that nearly all the recommendations provided by the agent significantly increased users' intentions to change their behaviour.
Citations: 11

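The key mechanism described above — grounding an explanation in the user model's beliefs and goals rather than in the agent's own rationale — can be illustrated with a toy explanation engine. Every structure and field name here is hypothetical; the paper's actual architecture is a cognitive agent with an embedded user model, not this dictionary sketch.

```python
def tailored_explanation(recommendation, user_model):
    """Pick an explanation grounded in the user's own goals and beliefs
    (from the user model) rather than the agent's knowledge; fall back to
    the agent's own rationale when the user model has nothing relevant.
    Illustrative sketch only, not the paper's explanation engine."""
    goal = user_model["goals"].get(recommendation["supports_goal"])
    belief = user_model["beliefs"].get(recommendation["addresses_belief"])
    parts = []
    if goal:
        parts.append(f"you said you want to {goal}")
    if belief:
        parts.append(f"you mentioned that {belief}, and this works around that")
    if not parts:  # no user-specific grounding available
        return f"I recommend this because {recommendation['agent_reason']}."
    return f"I recommend this because {' and '.join(parts)}."

# Hypothetical study-stress scenario mirroring the paper's domain.
user_model = {
    "goals": {"reduce_stress": "reduce exam stress"},
    "beliefs": {"no_time": "you have little free time"},
}
recommendation = {
    "supports_goal": "reduce_stress",
    "addresses_belief": "no_time",
    "agent_reason": "short walks lower stress hormones",
}
```

Here `tailored_explanation(recommendation, user_model)` cites the user's own stated goal ("reduce exam stress") instead of the agent's clinical rationale, which is the contrast the study investigates.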
Gesture Class Prediction by Recurrent Neural Network and Attention Mechanism
Fajrian Yunus, C. Clavel, C. Pelachaud
DOI: 10.1145/3308532.3329458 (published 2019-07-01)
Abstract: Our objective is to develop a machine-learning model that allows a virtual agent to automatically perform appropriate communicative gestures. Our first step is to compute when a gesture should be performed. We express this as a classification problem, initially splitting the data into a NoGesture class and a HasGesture class. We develop a model based on a recurrent neural network with an attention mechanism to compute the class from the speech prosody. We apply the model to a dialog corpus segmented into different gesture classes and gesture phases, treating the prosody as the input sequence and the gesture classes as the output sequence.
Citations: 7

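The architecture named in the abstract — a recurrent encoder over prosody frames with attention pooling, classifying NoGesture vs. HasGesture — can be sketched in plain NumPy. The weights below are random and the layer sizes invented, so this shows only the shape of the computation, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_attention_classify(prosody, n_hidden=16, n_classes=2):
    """Minimal sketch: an Elman RNN encodes a prosody sequence
    (frames x features, e.g. F0 and energy), additive attention pools the
    hidden states into a context vector, and a linear layer scores the
    classes (here 2: NoGesture vs. HasGesture). Randomly initialized,
    untrained — an architectural illustration only."""
    n_feat = prosody.shape[1]
    Wx = rng.normal(0, 0.1, (n_feat, n_hidden))    # input-to-hidden
    Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden
    v = rng.normal(0, 0.1, n_hidden)               # attention scorer
    Wo = rng.normal(0, 0.1, (n_hidden, n_classes)) # output layer

    h = np.zeros(n_hidden)
    states = []
    for x in prosody:                  # recurrent encoding, frame by frame
        h = np.tanh(x @ Wx + h @ Wh)
        states.append(h)
    H = np.stack(states)               # (frames, hidden)

    scores = H @ v                     # one attention energy per frame
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()               # softmax attention weights
    context = alpha @ H                # attention-weighted pooling

    logits = context @ Wo
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), alpha  # class distribution, frame weights
```

The attention weights `alpha` are the interpretable part: they indicate which prosody frames the classifier attended to when deciding whether a gesture should occur.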
Influence of Directivity on the Perception of Embodied Conversational Agents' Speech
J. Wendt, B. Weyers, J. Stienen, A. Bönsch, M. Vorländer, T. Kuhlen
DOI: 10.1145/3308532.3329434 (published 2019-07-01)
Abstract: Embodied conversational agents are becoming increasingly important in various virtual reality applications, e.g., as peers, trainers, or therapists. Besides their appearance and behavior, appropriate speech is required for them to be perceived as human-like and realistic. Beyond the voice signal itself, its auralization in the immersive virtual environment must also be believable. We therefore investigated the effect of adding directivity to the speech sound source. Directivity simulates orientation-dependent auralization with regard to the agent's head orientation. We performed a one-factorial user study with two levels (n=35) to investigate the effect directivity has on the perceived social presence and realism of the agent's voice. Our results do not indicate any significant effect of directivity on either variable. We attribute this partly to the overall low realism of the virtual agent, a scenario that was not strongly social, and the generally high variance of the examined measures. These results are critically discussed, and potential further research questions and study designs are identified.
Citations: 7

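Orientation-dependent auralization of the kind described above boils down to scaling the source signal by a gain that depends on the angle between the agent's facing direction and the listener. A first-order pattern is a common simplification; the study itself used measured speech directivity, so the formula and parameter below are illustrative stand-ins.

```python
import math

def directivity_gain(head_yaw_deg, listener_bearing_deg, alpha=0.5):
    """Orientation-dependent gain for a speech source using a first-order
    directivity pattern g(theta) = alpha + (1 - alpha) * cos(theta), where
    theta is the angle between the agent's facing direction and the
    listener. alpha = 0.5 gives a cardioid (full gain in front, silence
    behind). A simplified stand-in for measured speech directivity, not
    the model used in the paper."""
    theta = math.radians(listener_bearing_deg - head_yaw_deg)
    return max(0.0, alpha + (1.0 - alpha) * math.cos(theta))
```

An agent facing the listener radiates at full gain; when it turns its head away, the listener hears the voice attenuate, which is exactly the head-orientation cue the study manipulated.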
How do Leaders Perceive Stress and Followership from Nonverbal Behaviors Displayed by Virtual Followers?
Guillaume Demary, Jean-Claude Martin, S. Dubourdieu, S. Travers, Virginie Demulier
DOI: 10.1145/3308532.3329468 (published 2019-07-01)
Abstract: Managing a medical team in emergency situations requires not only technical but also non-technical skills. Leaders must train to manage different types of subordinates and to anticipate how these subordinates respond to orders and stressful events. Before designing virtual training environments for these leaders, it is necessary to understand how leaders perceive the nonverbal behaviors of virtual characters playing the role of subordinates. In this article, we describe a study we conducted to explore how leaders categorize virtual subordinates from the nonverbal expressions they display (i.e., facial expressions, torso orientation, gaze direction). We analyze how these multimodal behaviors impact the perception of follower style (proactive vs. passive, insubordination), interpersonal attitudes (dominance vs. submission), and stress. Our results suggest that leaders categorize virtual subordinates via nonverbal behaviors that are also perceived as signs of stress and interpersonal attitudes.
Citations: 2

Animating Virtual Signers: The Issue of Gestural Anonymization
Félix Bigand, E. Prigent, Annelies Braffort
DOI: 10.1145/3308532.3329410 (published 2019-07-01)
Abstract: This paper presents an ongoing PhD research project on visual perception and motion analysis applied to virtual signers (virtual agents used for Sign Language interaction). Virtual signers (or signing avatars) play an important role in the accessibility of information in sign languages. They have been developed notably for their capability to anonymize the shape and appearance of the content producer. While motion capture provides human-like, realistic, and comprehensible signing animations, it also raises the question of anonymity: human body movements contain important information about a person's identity, gender, or emotional state. In the present work, we address the problem of gestural identity in the context of animated agents in French Sign Language. On the one hand, the ability to identify a person from signing motion is assessed through psychophysical experiments using point-light displays. On the other hand, a computational framework is developed to investigate which features are critical for person identification and to control them over the virtual agent.
Citations: 1

Modelling Therapeutic Alliance using a User-aware Explainable Embodied Conversational Agent to Promote Treatment Adherence
Amal Abdulrahman, Deborah Richards
DOI: 10.1145/3308532.3329413 (published 2019-07-01)
Abstract: Non-adherence to a treatment plan recommended by the therapist is a key cause of the increasing rate of chronic medical conditions globally. The therapist-patient therapeutic alliance is regarded as a successful intervention and a good predictor of treatment adherence. Similar to the human scenario, embodied conversational agents (ECAs) have shown evidence of their ability to build an agent-patient therapeutic alliance, which motivates the effort to advance ECAs as a potential means to improve treatment adherence and, consequently, health outcomes. Building a therapeutic alliance implies the need for a positive environment where the ECA and the patient can share their knowledge and discuss their goals, preferences, and tasks towards building a shared plan, which is commonly done using explanations. However, explainable agents commonly rely on their own knowledge and goals in providing explanations, rather than the beliefs, plans, or goals of the user. It is not clear whether such explanations, in individual-specific contexts such as personal health assistance, are perceived by the user as relevant to decision-making towards their own behavior change. Therefore, in this research, we are developing a user-aware explainable ECA by extending the cognitive agent architecture with a user model, an explanation engine, and a modified planner that implements the concept of SharedPlans. The developed agent will be deployed and evaluated with real patients, and the therapeutic alliance will be measured using standard instruments.
Citations: 9

Expressive Virtual Human: Impact of expressive wrinkles and pupillary size on emotion recognition
Anne-Sophie Milcent, Erik Geslin, Abdelmajid Kadri, S. Richir
DOI: 10.1145/3308532.3329446 (published 2019-07-01)
Abstract: Improving the expressiveness of virtual humans is essential for qualitative interactions and the development of an emotional bond. This is especially relevant for applications that engage the user's cognitive processes, such as those dedicated to training or health. Our study aims to contribute to the design of an expressive virtual human by identifying and adapting visual factors that promote the conveyance of emotions. In this paper, we investigate the effect of expressive wrinkles and variation of pupil size, comparing the recognition of basic emotions on a real human and on an expressive virtual human. The virtual human was subjected to two different factors: expressive wrinkles and/or pupil size. Our results indicate that emotion recognition rates on the virtual agent are high, and that expressive wrinkles affect emotion recognition. The effect of pupillary size is less significant; however, both are recommended when designing an expressive virtual human.
Citations: 10

Evaluating Temporal Predictive Features for Virtual Patients Feedbacks
B. Penteado, M. Ochs, R. Bertrand, P. Blache
DOI: 10.1145/3308532.3329426 (published 2019-07-01)
Abstract: In the intelligent virtual agent domain, several machine learning models have been proposed to automatically determine the feedback of virtual agents during an interaction, using human-human interaction datasets as training corpora and most commonly based on verbal and prosodic features [Morency2010, Truong2010a]. These approaches presuppose an accurate system for automatically recognizing speech and prosody, which makes the overall model's performance dependent on the individual performance of the speech and prosody recognizers. As a consequence, one remaining challenge is to identify features that can be easily and accurately recognized during a human-machine interaction to predict virtual agents' feedback in real time.
Citations: 4

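An example of the kind of cheap, robustly recognizable feature the abstract calls for is a pause in the speaker's energy envelope, a classic trigger for listener feedback such as a backchannel nod. The detector below is a generic illustration with invented thresholds, not a feature or model from the paper.

```python
def feedback_opportunities(energy, frame_ms=10, silence_thresh=0.01,
                           min_pause_ms=200):
    """Scan a per-frame energy envelope and return timestamps (ms) where a
    pause has just lasted min_pause_ms — candidate moments for a virtual
    agent's listener feedback (e.g., a nod or 'mm-hm'). Thresholds are
    illustrative; real systems would tune them on corpus data."""
    min_frames = min_pause_ms // frame_ms
    run, out = 0, []
    for i, e in enumerate(energy):
        run = run + 1 if e < silence_thresh else 0
        if run == min_frames:           # pause just reached the minimum length
            out.append(i * frame_ms)    # fire exactly once per pause
    return out
```

Because it depends only on an energy threshold, this feature degrades gracefully when speech and prosody recognition are unreliable, which is the robustness concern raised in the abstract.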