Motion and Meaning: Data-Driven Analyses of The Relationship Between Gesture and Communicative Semantics

Carolyn Saund, Haley Matuszak, Anna Weinstein, Stacy Marsella

Proceedings of the 10th International Conference on Human-Agent Interaction, December 5, 2022. DOI: https://doi.org/10.1145/3527188.3561941
Abstract
Gestures convey critical information within social interactions. As such, the success of virtual agents (VAs) in both building social relationships and achieving their goals depends heavily on the information conveyed within their gestures. Because of the precision required for effective gesture behavior, it is prudent to retain some designer control over these conversational gestures. To exercise that control practically, however, we must first understand how gestural motion conveys meaning. One consideration in this relationship between motion and meaning is the notion of Ideational Units: at any point in time, only parts of a gesture's motion may convey meaning, while other parts may be held over from the previous gesture. In this paper, we develop, demonstrate, and release a set of tools that help quantify the relationship between the semantics conveyed in a gesture's co-speech utterance and the fine-grained motion of that gesture. This allows us to explore the complex relationship between motion and meaning. In particular, we use spectral motion clustering to discern patterns of motion that tend to be associated with semantic concepts, at both an aggregate and an individual-speaker level. We then discuss the potential for these tools to serve as a framework for both automated gesture generation and interpretation in virtual agents, and as an analysis framework for fundamental gesture research.
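To make the spectral-motion-clustering idea concrete, the sketch below clusters fixed-length motion summaries of gestures and then tabulates which semantic concepts from the co-speech utterance co-occur with each motion cluster. This is a minimal illustration assuming a generic per-frame joint-feature representation; the feature summary (summarize_motion), the toy data, and the concept labels are hypothetical stand-ins, not the authors' released pipeline.

```python
# Hedged sketch: spectral clustering of gesture motion features, then a
# cross-tabulation of clusters against semantic concepts. Assumes each
# gesture is an array of shape (num_frames, num_joint_features).
import numpy as np
from collections import Counter
from sklearn.cluster import SpectralClustering

def summarize_motion(frames: np.ndarray) -> np.ndarray:
    """Collapse one gesture into a fixed-length vector: mean pose plus
    mean absolute frame-to-frame velocity (an illustrative choice)."""
    velocity = np.abs(np.diff(frames, axis=0)).mean(axis=0)
    return np.concatenate([frames.mean(axis=0), velocity])

# Toy data: (motion, concept) pairs, where the concept would come from
# the gesture's co-speech utterance in a real analysis.
rng = np.random.default_rng(0)
gestures = [(rng.normal(size=(30, 12)), concept)
            for concept in ["negation", "size", "negation", "location"] * 5]

X = np.stack([summarize_motion(frames) for frames, _ in gestures])
labels = SpectralClustering(n_clusters=3, affinity="rbf",
                            random_state=0).fit_predict(X)

# Which semantic concepts tend to land in each motion cluster?
for cluster in range(3):
    concepts = [c for (_, c), lab in zip(gestures, labels) if lab == cluster]
    print(cluster, Counter(concepts).most_common())
```

Spectral clustering is a natural fit here because it groups by affinity structure rather than assuming convex clusters; in practice the motion representation and affinity choice would follow the paper's released tools rather than this toy summary.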