Dynamic Motion Generation by Flexible-Joint Robot based on Deep Learning using Images
Yuheng Wu, K. Takahashi, H. Yamada, Kitae Kim, Shingo Murata, S. Sugano, T. Ogata
2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), September 2018
DOI: 10.1109/DEVLRN.2018.8761020
Citations: 1
Abstract
Robots with flexible joints have recently attracted attention from researchers because such robots can passively adapt to environmental changes and realize dynamic motion that exploits inertia. Previous research proposed body-model acquisition using deep learning and achieved dynamic motion learning. However, using the end-effector position as the visual feedback signal for training restricts the robot's knowledge to the relation between itself and the task, rather than the relation between itself and the environment. In this research, we propose using images as the feedback signal so that the robot can grasp the overall situation within the task environment. This motion learning is performed via deep learning on raw image data. In an experiment, we had a robot perform the task motions once to acquire motor and image data. We then used a convolutional auto-encoder to extract image features from the raw image data, and trained a recurrent neural network on the extracted image features combined with the motor data. As a result, motion learning from image data through deep learning allowed the robot to acquire environmental information and carry out tasks that require consideration of environmental changes, exploiting its advantage of passive adaptation.
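The abstract does not specify network architectures, so the following is only a minimal PyTorch sketch of the described two-stage pipeline: a convolutional auto-encoder compresses raw camera frames into low-dimensional features, and a recurrent network (an LSTM here, standing in for the unspecified recurrent model) learns to predict the next (image feature, motor) pair from the current one. The image size (64x64 RGB), feature dimension, and joint count are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the pipeline described in the abstract (not the
# authors' code): a convolutional auto-encoder (CAE) extracts image
# features, then a recurrent network is trained on image features
# combined with motor data. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

IMG_FEAT = 32   # assumed image-feature dimension
N_JOINTS = 7    # assumed number of flexible joints

class ConvAutoEncoder(nn.Module):
    """Compresses 64x64 RGB frames to IMG_FEAT features and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, IMG_FEAT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(IMG_FEAT, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class MotionRNN(nn.Module):
    """Predicts the next (image feature, motor) pair from the current one."""
    def __init__(self, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(IMG_FEAT + N_JOINTS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, IMG_FEAT + N_JOINTS)

    def forward(self, seq, state=None):
        h, state = self.rnn(seq, state)
        return self.out(h), state

# Training sketch: the CAE is trained on raw frames first; the RNN is
# then trained to predict step t+1 of the (feature, motor) sequence
# from step t. The demonstration data below is random placeholder data.
cae, rnn = ConvAutoEncoder(), MotionRNN()
frames = torch.rand(1, 50, 3, 64, 64)   # one demo: 50 RGB frames
motors = torch.rand(1, 50, N_JOINTS)    # joint angles per frame

recon, feats = cae(frames.flatten(0, 1))
cae_loss = nn.functional.mse_loss(recon, frames.flatten(0, 1))

seq = torch.cat([feats.view(1, 50, IMG_FEAT), motors], dim=-1)
pred, _ = rnn(seq[:, :-1])              # predict step t+1 from step t
rnn_loss = nn.functional.mse_loss(pred, seq[:, 1:])
```

At run time, the same closed loop would feed the current camera frame through the trained encoder, concatenate the features with the current joint angles, and use the RNN's prediction as the next motor command, which is how image feedback lets the robot react to environmental changes rather than only to its own end-effector position.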