{"title":"Model-based reinforcement learning approach for deformable linear object manipulation","authors":"Haifeng Han, G. Paul, Takamitsu Matsubara","doi":"10.1109/COASE.2017.8256194","DOIUrl":null,"url":null,"abstract":"Deformable Linear Object (DLO) manipulation has wide application in industry and in daily life. Conventionally, it is difficult for a robot to manipulate a DLO to achieve the target configuration due to the absence of the universal model that specifies the DLO regardless of the material and environment. Since the state variable of a DLO can be very high dimensional, identifying such a model may require a huge number of samples. Thus, model-based planning of DLO manipulation would be impractical and unreasonable. In this paper, we explore another approach based on reinforcement learning. To this end, our approach is to apply a sample-efficient model-based reinforcement learning method, so-called PILCO [1], to resolve the high dimensional planning problem of DLO manipulation with a reasonable number of samples. To investigate the effectiveness of our approach, we developed an experimental setup with a dual-arm industrial robot and multiple sensors. Then, we conducted experiments to show that our approach is efficient by performing a DLO manipulation task.","PeriodicalId":445441,"journal":{"name":"2017 13th IEEE Conference on Automation Science and Engineering (CASE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 13th IEEE Conference on Automation Science and Engineering (CASE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COASE.2017.8256194","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 24
Abstract
Deformable Linear Object (DLO) manipulation has wide application in industry and in daily life. Conventionally, it is difficult for a robot to manipulate a DLO into a target configuration because there is no universal model that describes a DLO regardless of its material and environment. Since the state of a DLO can be very high-dimensional, identifying such a model may require a huge number of samples, making model-based planning of DLO manipulation impractical. In this paper, we instead explore an approach based on reinforcement learning. Specifically, we apply a sample-efficient model-based reinforcement learning method, known as PILCO [1], to solve the high-dimensional planning problem of DLO manipulation with a reasonable number of samples. To investigate the effectiveness of our approach, we developed an experimental setup with a dual-arm industrial robot and multiple sensors, and we conducted experiments on a DLO manipulation task to demonstrate the efficiency of our approach.
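
For illustration, below is a minimal, hypothetical sketch of a PILCO-style learning loop in Python (using numpy and scikit-learn), not the authors' implementation. It shows the general pattern the abstract refers to: data collected from the system are used to fit Gaussian process models of the state change, and the policy is then improved against those learned models before the next trial. The state and action dimensions, the linear controller, the cost function, and the toy surrogate dynamics are placeholders, and where real PILCO propagates GP uncertainty analytically via moment matching and optimizes the controller with analytic gradients, this sketch substitutes Monte Carlo rollouts and random search for readability.

```python
# Hypothetical, simplified sketch of a PILCO-style model-based RL loop.
# Real PILCO propagates GP prediction uncertainty analytically (moment
# matching) and uses analytic policy gradients; this sketch substitutes
# Monte Carlo rollouts and random search.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

STATE_DIM = 4    # e.g., a low-dimensional DLO feature vector (assumed)
ACTION_DIM = 2   # e.g., a planar end-effector displacement (assumed)
HORIZON = 20
EPISODES = 8

# Placeholder linear system standing in for the real robot/DLO interaction.
_B = np.random.RandomState(0).randn(STATE_DIM, ACTION_DIM) * 0.2

def true_step(s, a):
    """Unknown dynamics the robot would experience; simulated here."""
    return 0.95 * s + _B @ a

def linear_policy(params, s):
    """Simple linear feedback controller (PILCO often uses an RBF controller)."""
    return np.clip(params.reshape(ACTION_DIM, STATE_DIM) @ s, -1.0, 1.0)

def cost(s, target):
    """Saturating cost on the distance to the target configuration."""
    return 1.0 - np.exp(-0.5 * np.sum((s - target) ** 2))

def fit_dynamics(X, Y):
    """Fit one GP per state dimension to predict the state change."""
    models = []
    for d in range(Y.shape[1]):
        kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(1e-3)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, Y[:, d])
        models.append(gp)
    return models

def predict_next(models, s, a, rng):
    """Sample the next state from the learned model, keeping its uncertainty."""
    x = np.concatenate([s, a]).reshape(1, -1)
    delta = np.empty(len(models))
    for d, gp in enumerate(models):
        mean, std = gp.predict(x, return_std=True)
        delta[d] = mean[0] + std[0] * rng.standard_normal()
    return s + delta

def expected_cost(params, models, start, target, rng, n_rollouts=10):
    """Approximate the long-term cost of a policy under the learned model."""
    total = 0.0
    for _ in range(n_rollouts):
        s = start.copy()
        for _ in range(HORIZON):
            s = predict_next(models, s, linear_policy(params, s), rng)
            total += cost(s, target)
    return total / n_rollouts

def improve_policy(params, models, start, target, rng, iters=30, sigma=0.1):
    """Crude random-search policy improvement against the learned model."""
    best, best_c = params, expected_cost(params, models, start, target, rng)
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal(best.shape)
        c = expected_cost(cand, models, start, target, rng)
        if c < best_c:
            best, best_c = cand, c
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = np.zeros(STATE_DIM)
    start = rng.normal(size=STATE_DIM)
    params = 0.1 * rng.standard_normal(STATE_DIM * ACTION_DIM)
    X, Y = [], []
    for ep in range(EPISODES):
        s = start.copy()                       # 1) run the policy on the system
        for _ in range(HORIZON):
            a = linear_policy(params, s)
            s_next = true_step(s, a)
            X.append(np.concatenate([s, a]))
            Y.append(s_next - s)
            s = s_next
        models = fit_dynamics(np.array(X), np.array(Y))   # 2) learn GP dynamics
        params = improve_policy(params, models, start, target, rng)  # 3) improve
        print(f"episode {ep}: model-predicted cost "
              f"{expected_cost(params, models, start, target, rng):.3f}")
```

The property the abstract relies on is visible in this structure: because the GP dynamics model lets the policy be evaluated and improved offline between trials, only a small number of real robot interactions are needed, which is what makes a model-based approach feasible for a high-dimensional DLO state.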