Factor exploration of gestural stroke choice in the context of ambiguous instruction utterances: challenges to synthesizing semantic gesture from speech alone
{"title":"Factor exploration of gestural stroke choice in the context of ambiguous instruction utterances: challenges to synthesizing semantic gesture from speech alone","authors":"N. DePalma, J. Hodgins","doi":"10.1109/RO-MAN50785.2021.9515416","DOIUrl":null,"url":null,"abstract":"Current models of gesture synthesis focus primarily on a speech signal to synthesize gestures. In this paper, we take a critical look at this approach from the point of view of gesture’s tendency to disambiguate the verbal component of the expression. We identify and contribute an analysis of three challenge factors for these models: 1) synthesizing gesture in the presence of ambiguous utterances seems to be a overwhelmingly useful case for gesture production yet is not at present supported by present day models of gesture generation, 2) finding the best f-formation to convey spatial gestural information like gesturing directions makes a significant difference for everyday users and must be taken into account, and 3) assuming that captured human motion is a plentiful and easy source for retargeting gestural motion may not yet take into account the readability of gestures under kinematically constrained feasibility spaces.Recent approaches to generate gesture for agents[1] and robots [2] treat gesture as co-speech that is strictly dependent on verbal utterances. Evidence suggests that gesture selection may leverage task context so it is not dependent on verbal utterance only. This effect is particularly evident when attempting to generate gestures from ambiguous verbal utterances (e.g. \"You do this when you get to the fork in the road\"). Decoupling this strict dependency may allow gesture to be synthesized for the purpose of clarification of the ambiguous verbal utterance.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"9 1","pages":"102-109"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN50785.2021.9515416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Current models of gesture synthesis focus primarily on a speech signal to synthesize gestures. In this paper, we take a critical look at this approach from the point of view of gesture's tendency to disambiguate the verbal component of the expression. We identify and contribute an analysis of three challenge factors for these models: 1) synthesizing gesture in the presence of ambiguous utterances seems to be an overwhelmingly useful case for gesture production, yet it is not supported by present-day models of gesture generation; 2) finding the best f-formation to convey spatial gestural information, such as gesturing directions, makes a significant difference for everyday users and must be taken into account; and 3) the assumption that captured human motion is a plentiful and easy source for retargeting gestural motion may not account for the readability of gestures under kinematically constrained feasibility spaces.

Recent approaches to generating gesture for agents [1] and robots [2] treat gesture as co-speech that is strictly dependent on verbal utterances. Evidence suggests that gesture selection may leverage task context, so it is not dependent on the verbal utterance alone. This effect is particularly evident when attempting to generate gestures from ambiguous verbal utterances (e.g., "You do this when you get to the fork in the road"). Decoupling this strict dependency may allow gesture to be synthesized for the purpose of clarifying the ambiguous verbal utterance.
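To make the proposed decoupling concrete, the following is a minimal illustrative sketch, not code from the paper: all names (`TaskContext`, `speech_only_gesture`, `context_aware_gesture`) are hypothetical. It contrasts a speech-only gesture selector, which has nothing to ground an ambiguous demonstrative like "this", with a variant that also conditions on task context and can therefore emit a clarifying deictic stroke.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskContext:
    """Hypothetical non-verbal context available to the gesture generator."""
    referents: list[str]                    # task objects/locations, e.g. ["right fork"]
    listener_position: tuple[float, float]  # relevant to choosing an f-formation

def speech_only_gesture(utterance: str) -> str:
    """Speech-driven baseline: ambiguous utterances yield non-semantic gestures."""
    if "this" in utterance or "that" in utterance:
        return "beat"  # no referent is recoverable from the words alone
    return "iconic"

def context_aware_gesture(utterance: str, ctx: Optional[TaskContext]) -> str:
    """Decoupled variant: task context can resolve what speech leaves ambiguous."""
    ambiguous = "this" in utterance or "that" in utterance
    if ambiguous and ctx and ctx.referents:
        # A deictic (pointing) stroke toward a known referent clarifies the utterance.
        return f"deictic -> {ctx.referents[0]}"
    return speech_only_gesture(utterance)

if __name__ == "__main__":
    ctx = TaskContext(referents=["right fork"], listener_position=(1.0, 0.0))
    u = "You do this when you get to the fork in the road"
    print(speech_only_gesture(u))        # beat
    print(context_aware_gesture(u, ctx)) # deictic -> right fork
```

The design point of the sketch is only that the gesture-selection function takes the task state as an input alongside the utterance; any real model would replace the toy keyword test with learned disambiguation.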