Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents: Latest Publications

Task Allocation in Multi-Agent Systems with Grammar-Based Evolution
Dilini Samarasinghe, M. Barlow, E. Lakshika, Kathryn E. Kasmarik
DOI: https://doi.org/10.1145/3472306.3478337
Published: 2021-09-14
Abstract: This paper presents a grammar-based evolutionary model to facilitate the autonomous emergence of task allocation in intelligent multi-agent systems. The approach adopts a context-free grammar to determine the behaviour rule syntax. This allows for flexibility in evolving task allocation under multiple and dynamic constraints without manual rule design and parameter tuning. Experimental evaluations conducted with a target discovery simulation illustrate that the grammar-based model performs successfully in both dynamic and non-dynamic conditions. A statistically significant performance improvement is shown compared to an algorithm developed with the broadcast of local eligibility mechanism and a genetic programming mechanism. Grammatical evolution can achieve near-optimal solutions under restrictions applied to the number of agents, targets, and the time allowed. Further, analysis of the evolved rule structures shows that grammatical evolution can identify less complex rule structures for behaviours while maintaining the expected level of performance. The results suggest that the proposed model is a promising alternative for dynamic task allocation with human interactions in complex real-world domains.
Citations: 3
Someone or Something to Play With?: An Empirical Study on how Parents Evaluate the Social Appropriateness of Interactions Between Children and Differently Embodied Artificial Interaction Partners
Jessica M. Szczuka, Hatice S. Güzelbey, N. Krämer
DOI: https://doi.org/10.1145/3472306.3478349
Published: 2021-09-14
Abstract: Children are raised with technologies that are able to respond to them in natural language. This makes it easy not only to communicate with these technologies but also to connect with them socially. While communication abilities might have benefits (e.g., for learning), they might also raise concerns among parents, as the technologies are not necessarily designed to facilitate children's social, emotional, and cognitive development or to serve as a model for the construction of a social world among humans. The technologies children can talk to differ in their embodiment (e.g., robots and voice assistants), which could affect central variables such as social presence, trust, and privacy concerns. The present study aimed to investigate how parents conceptualize socially appropriate interactions between children and technologies. The results underline the parents' emphasis on embodiment and privacy protection. The study underlines the importance of incorporating the parental perspective to meet the expectations of responsible interactions between children and technologies.
Citations: 2
Towards Understanding How Virtual Human's Verbal Persuasion Strategies Influence User Intentions To Perform Health Behavior
Mohan S Zalake, K. Vaddiparti, Pavlo D. Antonenko, Benjamin C. Lok
DOI: https://doi.org/10.1145/3472306.3478345
Published: 2021-09-14
Abstract: This paper investigates how a virtual human's persuasion attempts influence the user's intentions to perform the recommended behaviors, using the Theory of Planned Behavior. The Theory of Planned Behavior suggests that users' attitudes towards the behavior, subjective norms, and perceived behavior control determine user intentions to perform behaviors. Using this theory, we identify the underlying mechanisms by which users' attitudes, subjective norms, and perceived behavior control influence the effectiveness of a virtual human's persuasive attempts on users' intentions to perform the behavior. To identify these mechanisms, we conducted an online study with 202 college students. In a between-subjects design, a virtual human persuaded students to use a mental health coping skill using six different persuasion strategies. We present evidence that persuasion strategies influenced the students' perceived behavior control, which in turn influenced user intentions to perform the behavior. Additionally, the paper shows that user personality moderated the effect of persuasion strategies on students' perceived behavior control. This knowledge of the underlying mechanisms by which a virtual human's persuasion attempts influence users' intentions to perform the recommended behavior can help in designing effective intelligent virtual humans for persuasion.
Citations: 3
Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech
Taras Kucherenko, Rajmund Nagy, Patrik Jonell, Michael Neff, Hedvig Kjellstrom, G. Henter
DOI: https://doi.org/10.1145/3472306.3478333
Published: 2021-06-28
Abstract: We propose a new framework for gesture generation, aiming to allow data-driven approaches to produce more semantically rich gestures. Our approach first predicts whether to gesture, followed by a prediction of the gesture properties. Those properties are then used as conditioning for a modern probabilistic gesture-generation model capable of high-quality output. This empowers the approach to generate gestures that are both diverse and representational. Follow-ups and more information can be found on the project page: https://svito-zar.github.io/speech2properties2gestures/
Citations: 11
Learning Speech-driven 3D Conversational Gestures from Video
I. Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, H. Seidel, Gerard Pons-Moll, Mohamed A. Elgharib, C. Theobalt
DOI: https://doi.org/10.1145/3472306.3478335
Published: 2021-02-13
Abstract: We propose the first approach to synthesize synchronous 3D conversational body and hand gestures, as well as 3D face and head animations, of a virtual character from speech input. Our algorithm uses a CNN architecture that leverages the inherent correlation between facial expression and hand gestures. Synthesis of conversational body gestures is a multi-modal problem, since many similar gestures can plausibly accompany the same input speech. To synthesize plausible body gestures in this setting, we train a Generative Adversarial Network (GAN)-based model that measures the plausibility of the generated sequences of 3D body motion when paired with the input audio features. We also contribute a new corpus that contains more than 33 hours of annotated data from in-the-wild videos of talking people. To this end, we apply state-of-the-art monocular approaches for 3D body and hand pose estimation, as well as 3D face performance capture, to the video corpus. In this way, we can train on orders of magnitude more data than previous algorithms that resort to complex in-studio motion capture solutions, and thereby train more expressive synthesis algorithms. Our experiments and user study show the state-of-the-art quality of our speech-synthesized full 3D character animations.
Citations: 53