Learning from Humans to Generate Communicative Gestures for Social Robots
Nguyen Tan Viet Tuyen, A. Elibol, N. Chong
2020 17th International Conference on Ubiquitous Robots (UR), June 2020
DOI: 10.1109/UR49135.2020.9144985
Citations: 7
Abstract
Non-verbal behaviors play an essential role in human-human interaction, allowing people to convey their intentions and attitudes and affecting social outcomes. In the context of human-robot interaction, communicative gestures are particularly important because they are expected to endow social robots with the ability to emphasize their speech, describe objects, or express their intentions. In this paper, we propose an approach based on a Conditional Generative Adversarial Network (CGAN) to learn the relation between human behaviors and natural language. We demonstrated the validity of our model on a public dataset. The experimental results indicate that the generated human-like gestures correctly convey the meaning of the input sentences. The generated gestures were then transformed into the target robot's motions, serving as the robot's personalized communicative gestures; these showed significant improvements over the baselines and could be widely accepted and understood by the general public.
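The core idea described above, conditioning gesture generation on the input sentence, can be sketched as a conditional generator G(z, c) that maps a noise vector z and a sentence-conditioning vector c to a gesture frame. The following is a minimal illustrative toy in plain Python, not the authors' CGAN architecture; the dimensions, the single linear layer, and the constant stand-in embedding are all assumptions made for the sketch.

```python
import random

# Toy conditional generator: maps (noise z, sentence embedding c) to one
# gesture frame. The dimensions and the single random linear map are
# illustrative assumptions, not the paper's actual network.

NOISE_DIM = 8      # latent noise size (assumed)
COND_DIM = 16      # sentence-embedding size (assumed)
GESTURE_DIM = 12   # e.g. a handful of joint angles per frame (assumed)

random.seed(0)
# Random weight matrix standing in for the learned generator parameters.
W = [[random.gauss(0, 0.1) for _ in range(NOISE_DIM + COND_DIM)]
     for _ in range(GESTURE_DIM)]

def generate_gesture_frame(z, c):
    """Concatenate noise and condition, then apply a linear map: toy G(z, c)."""
    x = z + c  # list concatenation, i.e. the vector [z; c]
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

z = [random.gauss(0, 1) for _ in range(NOISE_DIM)]
c = [0.5] * COND_DIM  # stand-in for a sentence embedding
frame = generate_gesture_frame(z, c)
print(len(frame))  # 12
```

In the full CGAN setting, a discriminator D(x, c) would judge whether a gesture x is realistic given the same condition c, and the two networks would be trained adversarially; only the conditioning interface is shown here.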