{"title":"Authoring Communicative Behaviors for Situated, Embodied Characters","authors":"T. Pejsa","doi":"10.1145/2663204.2667576","DOIUrl":null,"url":null,"abstract":"Embodied conversational agents hold great potential as multimodal interfaces due to their ability to communicate naturally using speech and nonverbal cues. The goal of my research is to enable animators and designers to endow ECAs with interactive behaviors that are controllable, communicatively effective, as well as natural and aesthetically appealing; I focus in particular on spatially situated, communicative nonverbal behaviors such as gaze and deictic gestures. This goal requires addressing challenges in the space of animation authoring and editing, parametric control, behavior coordination and planning, and retargeting to different embodiment designs. My research will aim to provide animators and designers with techniques and tools needed to author natural, expressive, and controllable gaze and gesture movements that leverage empirical or learned models of human behavior, to apply such behaviors to characters with different designs and communicative styles, and to develop techniques and models for planning of coordinated behaviors that economically and correctly convey the range of diverse cues required for multimodal, user-machine interaction.","PeriodicalId":389037,"journal":{"name":"Proceedings of the 16th International Conference on Multimodal Interaction","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th International Conference on Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2663204.2667576","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Embodied conversational agents (ECAs) hold great potential as multimodal interfaces due to their ability to communicate naturally using speech and nonverbal cues. The goal of my research is to enable animators and designers to endow ECAs with interactive behaviors that are controllable, communicatively effective, natural, and aesthetically appealing; I focus in particular on spatially situated, communicative nonverbal behaviors such as gaze and deictic gestures. This goal requires addressing challenges in animation authoring and editing, parametric control, behavior coordination and planning, and retargeting to different embodiment designs. My research aims to provide animators and designers with the techniques and tools needed to author natural, expressive, and controllable gaze and gesture movements that leverage empirical or learned models of human behavior; to apply such behaviors to characters with different designs and communicative styles; and to develop techniques and models for planning coordinated behaviors that economically and correctly convey the diverse range of cues required for multimodal, user-machine interaction.
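To make the idea of parametric control of a situated behavior such as gaze concrete, below is a minimal, hypothetical sketch (not the author's actual model): a gaze shift toward a spatial target is split between head and eye rotation by a single alignment parameter. The function and parameter names (plan_gaze_shift, head_align) are illustrative assumptions, not an API from the paper.

```python
import numpy as np

def plan_gaze_shift(eye_pos, eye_forward, target_pos, head_align=0.5):
    """Split a gaze shift toward target_pos between the head and the eyes.

    head_align in [0, 1] controls how much of the required rotation the head
    contributes (0 = eyes only, 1 = head fully aligns with the target).
    Returns the head and eye rotation amplitudes in degrees.
    """
    # Desired gaze direction from the eyes to the target.
    gaze_dir = target_pos - eye_pos
    gaze_dir /= np.linalg.norm(gaze_dir)
    eye_forward = eye_forward / np.linalg.norm(eye_forward)

    # Total angular offset between the current and desired gaze directions.
    cos_angle = np.clip(np.dot(eye_forward, gaze_dir), -1.0, 1.0)
    total_angle = np.degrees(np.arccos(cos_angle))

    # Distribute the rotation: the head covers head_align of it, the eyes the rest.
    head_angle = head_align * total_angle
    eye_angle = total_angle - head_angle
    return head_angle, eye_angle

# Example: a character looking straight ahead shifts gaze to a target off to the side.
head_deg, eye_deg = plan_gaze_shift(
    eye_pos=np.array([0.0, 1.6, 0.0]),
    eye_forward=np.array([0.0, 0.0, 1.0]),
    target_pos=np.array([1.0, 1.6, 1.0]),
    head_align=0.7,
)
print(f"head rotates {head_deg:.1f} deg, eyes rotate {eye_deg:.1f} deg")
```

Exposing head_align (and analogous parameters for torso alignment or gesture amplitude) is one way such movements could be kept controllable by animators while the underlying kinematics follow empirical models of human behavior.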