Nguyen Tan Viet Tuyen, A. Elibol, N. Chong
2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), published 2021-07-08
DOI: 10.1109/ARSO51874.2021.9542828
A GAN-based Approach to Communicative Gesture Generation for Social Robots
People use a wide range of non-verbal behaviors to signal their intentions in interpersonal interactions. Motivated by the proven benefits and impact of people's social interaction skills, considerable attention has been paid to generating non-verbal cues for social robots. In particular, communicative gestures help social robots emphasize the thoughts in their speech, describing something or conveying their feelings through bodily movements. This paper introduces a generative framework for producing communicative gestures that reinforce the semantic content social robots express. The proposed model is inspired by the Conditional Generative Adversarial Network and built upon a convolutional neural network. The experimental results confirmed that a variety of motions could be generated to express the input contexts. The framework can produce synthetic actions defined over a large number of upper-body joints, allowing social robots to clearly express sophisticated contexts. Indeed, the fully implemented model shows better performance than the variant without the Action Encoder and Decoder. Finally, the generated motions were retargeted to the robot and combined with the robot's speech, with the expectation of gaining broad social acceptance.
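The abstract describes the model only at a high level (a Conditional GAN built on a convolutional network, mapping a speech/semantic context to an upper-body motion sequence). The sketch below is a minimal illustrative PyTorch version of that idea, not the paper's implementation: the sequence length, joint count, embedding sizes, and layer configuration are all assumptions, and the paper's Action Encoder/Decoder components are omitted. The generator maps a noise vector plus a context embedding to a joint trajectory; the discriminator scores a (motion, context) pair.

```python
import torch
import torch.nn as nn

T = 32           # hypothetical: 32 motion frames per gesture
J = 10 * 3       # hypothetical: 10 upper-body joints, 3 values each
Z_DIM, C_DIM = 64, 16  # assumed noise and context-embedding sizes

class Generator(nn.Module):
    """Maps (noise, context) -> a joint-value sequence of shape (B, J, T)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z_DIM + C_DIM, 128 * (T // 4))
        self.net = nn.Sequential(
            nn.ConvTranspose1d(128, 64, 4, stride=2, padding=1),  # T/4 -> T/2
            nn.ReLU(),
            nn.ConvTranspose1d(64, J, 4, stride=2, padding=1),    # T/2 -> T
            nn.Tanh(),  # normalized joint values in [-1, 1]
        )

    def forward(self, z, c):
        h = self.fc(torch.cat([z, c], dim=1)).view(-1, 128, T // 4)
        return self.net(h)

class Discriminator(nn.Module):
    """Scores a (motion, context) pair as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(J + C_DIM, 64, 4, stride=2, padding=1),  # T -> T/2
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, 4, stride=2, padding=1),        # T/2 -> T/4
            nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(128 * (T // 4), 1)

    def forward(self, x, c):
        # Tile the context over time and append it as extra input channels,
        # so the critic judges the motion *given* the speech context.
        c_tiled = c.unsqueeze(-1).expand(-1, -1, T)
        h = self.net(torch.cat([x, c_tiled], dim=1)).flatten(1)
        return self.fc(h)

g, d = Generator(), Discriminator()
z = torch.randn(4, Z_DIM)
c = torch.randn(4, C_DIM)
motion = g(z, c)           # (4, J, T): one gesture per batch item
score = d(motion, c)       # (4, 1): conditional real/fake logit
```

Conditioning both networks on the same context embedding is what makes this a *conditional* GAN: sampling different `z` for a fixed `c` yields the "variety of motions for the same input context" the abstract reports, while the discriminator pushes those motions to remain plausible for that context.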