Predicting an individual's gestures from the interlocutor's co-occurring gestures and related speech

Costanza Navarretta
DOI: 10.1109/COGINFOCOM.2016.7804554
Published in: 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), October 2016
Citations: 10

Abstract

Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of one subject can contribute to the prediction of gestures of the same type produced by the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions, respectively. Speech only contributes to prediction in the case of facial expressions. The results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, and this is relevant to the development of socio-cognitive ICT.
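The paper does not publish its classifier code, but the setup it describes (a Naive Bayes classifier over a gesture's duration and shape, predicting a property of the interlocutor's co-occurring gesture) can be sketched with the standard library alone. The class name, the toy data, and the binary "interlocutor head movement present" target below are hypothetical illustrations, not the authors' actual features or annotation scheme: duration is modeled as a per-class Gaussian and shape as a smoothed categorical distribution.

```python
import math
from collections import Counter

class GestureNaiveBayes:
    """Naive Bayes over one continuous feature (gesture duration, modeled as a
    per-class Gaussian) and one categorical feature (gesture shape, modeled
    with Laplace-smoothed counts). Illustrative sketch only."""

    def fit(self, durations, shapes, labels):
        self.classes = sorted(set(labels))
        n = len(labels)
        # Log-priors from class frequencies.
        self.priors = {c: math.log(labels.count(c) / n) for c in self.classes}
        self.stats = {}       # per-class (mean, variance) of duration
        self.shape_logp = {}  # per-class log P(shape), Laplace-smoothed
        vocab = set(shapes)
        for c in self.classes:
            ds = [d for d, y in zip(durations, labels) if y == c]
            mean = sum(ds) / len(ds)
            var = sum((d - mean) ** 2 for d in ds) / len(ds) or 1e-6
            self.stats[c] = (mean, var)
            counts = Counter(s for s, y in zip(shapes, labels) if y == c)
            total = sum(counts.values()) + len(vocab)
            self.shape_logp[c] = {s: math.log((counts[s] + 1) / total)
                                  for s in vocab}
        return self

    def predict(self, duration, shape):
        def log_gauss(x, mean, var):
            return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
        scores = {}
        for c in self.classes:
            mean, var = self.stats[c]
            s = self.priors[c] + log_gauss(duration, mean, var)
            # Fall back to a tiny probability for shapes unseen in training.
            s += self.shape_logp[c].get(shape, math.log(1e-6))
            scores[c] = s
        return max(scores, key=scores.get)

# Toy usage: predict whether the interlocutor produces a head movement (1/0)
# from the duration and shape of the speaker's co-occurring head movement.
model = GestureNaiveBayes().fit(
    durations=[0.9, 1.1, 1.0, 0.2, 0.3, 0.25],
    shapes=["Nod", "Nod", "Tilt", "Jerk", "Jerk", "Nod"],
    labels=[1, 1, 1, 0, 0, 0],
)
print(model.predict(1.0, "Nod"))    # long nod -> class 1
print(model.predict(0.25, "Jerk"))  # short jerk -> class 0
```

The same scheme extends to the paper's multi-class case (predicting the shape label of the interlocutor's gesture rather than mere presence) by training on shape labels as the target.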