Generation of expressive motions for a tabletop robot interpolating from hand-made animations
Gonzalo Mier, F. Caballero, Keisuke Nakamura, L. Merino, R. Gomez
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), October 2019
DOI: 10.1109/RO-MAN46459.2019.8956246 (https://doi.org/10.1109/RO-MAN46459.2019.8956246)
Citations: 0
Abstract
Motion is an important modality for human-robot interaction. Besides being a fundamental component for carrying out tasks, motion also allows a robot to convey intentions and expressions. In this paper, we focus on a tabletop robot in which motion, among other modalities, is used to convey expressions. The robot incorporates a set of pre-programmed motion animations, created by designers with expertise in animation, that show different expressions at various intensities. The objective of this paper is to analyze whether these examples can be used as demonstrations and combined by the robot to generate additional, richer expressions. The main challenges are the representation space used and the scarce number of examples. The paper compares three learning-from-demonstration approaches for the task at hand, and a user study is presented to evaluate the new expressive motions automatically generated by combining previous demonstrations.
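The abstract does not specify which learning-from-demonstration method the robot uses to combine the hand-made animations. As a purely illustrative sketch of the general idea in the title (interpolating between hand-made animations), the following assumes each demonstration is a keyframe sequence over the same joints and blends two demonstrations of one expression at different intensities; all function names, file names, and the data format are hypothetical.

```python
import numpy as np

def resample(animation: np.ndarray, n_frames: int) -> np.ndarray:
    """Linearly resample a (frames x joints) animation onto n_frames."""
    t_src = np.linspace(0.0, 1.0, len(animation))
    t_dst = np.linspace(0.0, 1.0, n_frames)
    return np.stack(
        [np.interp(t_dst, t_src, animation[:, j]) for j in range(animation.shape[1])],
        axis=1,
    )

def blend(anim_low: np.ndarray, anim_high: np.ndarray,
          alpha: float, n_frames: int = 100) -> np.ndarray:
    """Interpolate between a low- and a high-intensity demonstration.

    alpha = 0 reproduces anim_low, alpha = 1 reproduces anim_high, and
    intermediate values yield intensities not present in the original set.
    """
    a = resample(anim_low, n_frames)
    b = resample(anim_high, n_frames)
    return (1.0 - alpha) * a + alpha * b

# Hypothetical usage with two demonstrations of the same expression:
# happy_mild = np.load("happy_mild.npy")      # (frames x joints), assumed format
# happy_strong = np.load("happy_strong.npy")
# happy_medium = blend(happy_mild, happy_strong, alpha=0.5)
```

This is only a minimal stand-in for the paper's approach; the actual work compares three learning-from-demonstration methods and evaluates the generated motions in a user study.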