{"title":"Predictive Models for Robot Ego-Noise Learning and Imitation","authors":"Antonio Pico Villalpando, G. Schillaci, V. Hafner","doi":"10.1109/DEVLRN.2018.8761017","DOIUrl":null,"url":null,"abstract":"We investigate predictive models for robot ego-noise learning and imitation. In particular, we present a framework based on internal models—such as forward and inverse models—that allow a robot to learn how its movements sound like, and to communicate actions to perform to other robots through auditory means. We adopt a developmental approach in the learning of such models, where training sensorimotor data is gathered through self-exploration behaviours. In a simulated experiment presented here, a robot generates specific auditory features from an intended sequence of actions and communicates them for reproduction to another robot, which consequently decodes them into motor commands, using the knowledge of its own motor system. As to the current state, this paper presents an experiment where a robot reproduces auditory sequences previously generated by itself. The presented experiment demonstrates the potentials of the proposed architecture for robot ego-noise learning and for robot communication and imitation through natural means, such as audition. Future work will include situations where different agents use models that are trained with—and thus are specific to—their own self-generated sensorimotor data.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2018.8761017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
We investigate predictive models for robot ego-noise learning and imitation. In particular, we present a framework based on internal models, such as forward and inverse models, that allows a robot to learn what its movements sound like and to communicate actions to perform to other robots through auditory means. We adopt a developmental approach to learning such models, in which training sensorimotor data are gathered through self-exploration behaviours. In the simulated experiment presented here, a robot generates specific auditory features from an intended sequence of actions and communicates them for reproduction to another robot, which then decodes them into motor commands using the knowledge of its own motor system. At the current stage, this paper presents an experiment in which a robot reproduces auditory sequences it previously generated itself. The experiment demonstrates the potential of the proposed architecture for robot ego-noise learning and for robot communication and imitation through natural means, such as audition. Future work will include situations where different agents use models that are trained with, and thus are specific to, their own self-generated sensorimotor data.
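To make the forward/inverse internal-model pairing concrete, the following is a minimal sketch, not the authors' implementation. It assumes, purely for illustration, 2-D motor commands, 13 ego-noise features (MFCC-like), randomly "babbled" sensorimotor data standing in for self-exploration, and plain MLP regressors standing in for whatever predictive models the paper actually uses.

```python
"""
Hypothetical sketch of the forward/inverse internal-model loop described in
the abstract: a forward model maps intended actions to the ego-noise features
they would produce, and an inverse model decodes heard features back into
motor commands. Dimensions, model class, and data generation are assumptions.
"""
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- Self-exploration ("motor babbling"): gather paired sensorimotor data ---
N_SAMPLES, MOTOR_DIM, AUDIO_DIM = 2000, 2, 13   # hypothetical sizes
motor_cmds = rng.uniform(-1.0, 1.0, size=(N_SAMPLES, MOTOR_DIM))
# Stand-in for the real robot: ego-noise features as an unknown, noisy
# function of the executed motor command.
mixing = rng.normal(size=(MOTOR_DIM, AUDIO_DIM))
audio_feats = (np.tanh(motor_cmds @ mixing)
               + 0.05 * rng.normal(size=(N_SAMPLES, AUDIO_DIM)))

# --- Forward model: motor command -> predicted ego-noise features ---
forward_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                             random_state=0)
forward_model.fit(motor_cmds, audio_feats)

# --- Inverse model: heard ego-noise features -> motor command ---
inverse_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                             random_state=0)
inverse_model.fit(audio_feats, motor_cmds)

# --- Communication/imitation loop sketched in the abstract ---
# The "speaker" turns an intended action sequence into auditory features ...
intended_actions = rng.uniform(-1.0, 1.0, size=(5, MOTOR_DIM))
communicated_audio = forward_model.predict(intended_actions)
# ... and the "listener" decodes them into its own motor commands.
decoded_actions = inverse_model.predict(communicated_audio)

print("mean reconstruction error:",
      np.mean(np.abs(decoded_actions - intended_actions)))
```

In the single-robot experiment reported in the paper, the same agent would play both roles, reproducing auditory sequences it previously generated itself; with per-agent training data, the two models could belong to different robots.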