{"title":"从对话者共同出现的手势和相关的言语中预测个体的手势","authors":"Costanza Navarretta","doi":"10.1109/COGINFOCOM.2016.7804554","DOIUrl":null,"url":null,"abstract":"Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related to contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions and this is relevant to the development of socio-cognitive ICT.","PeriodicalId":440408,"journal":{"name":"2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Predicting an individual's gestures from the interlocutor's co-occurring gestures and related speech\",\"authors\":\"Costanza Navarretta\",\"doi\":\"10.1109/COGINFOCOM.2016.7804554\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants. A number of gestures are even mirrored or mimicked. Therefore, we hypothesize that the gestures of a subject can contribute to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related to contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions respectively. Speech only contributes to prediction in the case of facial expressions. 
The obtained results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions and this is relevant to the development of socio-cognitive ICT.\",\"PeriodicalId\":440408,\"journal\":{\"name\":\"2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/COGINFOCOM.2016.7804554\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COGINFOCOM.2016.7804554","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Overlapping speech and gestures are common in face-to-face conversations and have been interpreted as a sign of synchronization between conversation participants; a number of gestures are even mirrored or mimicked. We therefore hypothesize that the gestures of one subject can contribute to predicting gestures of the same type produced by the other subject. In this work, we also investigate whether the speech segments to which these gestures are related contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape features of head movements and facial expressions contributes to identifying the presence and shape of head movements and facial expressions, respectively. Speech contributes to prediction only in the case of facial expressions. These results show that the gestures of the interlocutors are one of the numerous factors to be accounted for when modeling gesture production in conversational interactions, which is relevant to the development of socio-cognitive ICT.
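To make the classification setup concrete, below is a minimal sketch of how such a prediction task could be framed with a Naive Bayes classifier: predicting whether one participant produces a head movement from the shape and (discretized) duration of the interlocutor's co-occurring head movement. The feature names, category values, and data are illustrative assumptions, not the paper's annotations or dataset.

```python
# Sketch only: a presence-prediction task with hypothetical annotations.
# Each row describes one time segment by the interlocutor's gesture shape
# and a discretized duration bin; these feature values are assumed, not
# taken from the paper's corpus.
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

X_raw = [
    ["Nod",        "short"],
    ["Shake",      "medium"],
    ["NoMovement", "short"],
    ["Nod",        "long"],
    ["TiltSide",   "medium"],
    ["NoMovement", "long"],
]
# Target: does the other participant produce a head movement in the same
# segment? (A shape-prediction task would use shape labels as the target.)
y = [1, 1, 0, 1, 0, 0]

# Encode the categorical features as non-negative integers for CategoricalNB.
encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

clf = CategoricalNB()
clf.fit(X, y)

# Predict for a new segment in which the interlocutor nods briefly.
new_segment = encoder.transform([["Nod", "short"]])
print(clf.predict(new_segment))  # e.g. [1]: a co-occurring movement is likely
```

In the same spirit, adding the related speech segment as an extra categorical feature would let one test the abstract's second question, whether speech improves prediction, by comparing classifier accuracy with and without that feature.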