Deep-Reinforcement-Learning-Based Path Planning for Industrial Robots Using Distance Sensors as Observation
Teham Bhuiyan, Linh Kästner, Yifan Hu, Benno Kutschank, Jens Lambrecht
2023 8th International Conference on Control and Robotics Engineering (ICCRE)
DOI: 10.1109/ICCRE57112.2023.10155608
Published: 2023-01-14
Citations: 1
Abstract
Traditionally, collision-free path planning for industrial robots is realized by sampling-based algorithms such as RRT (Rapidly-exploring Random Tree) and PRM (Probabilistic Roadmap). Sampling-based algorithms require long computation times, especially in complex environments, and the environment in which they are employed needs to be known beforehand. Applying these approaches to a new environment requires tedious engineering effort to tune hyperparameters, which is time- and cost-intensive. On the other hand, DRL (Deep Reinforcement Learning) has shown remarkable results in dealing with complex environments, generalizing to new problem instances, and solving motion planning problems efficiently. On that account, this paper proposes a Deep-Reinforcement-Learning-based motion planner for robotic manipulators. We propose an easily reproducible method to train an agent in randomized scenarios, achieving generalization to unknown environments. We evaluated our model against state-of-the-art sampling- and DRL-based planners in several experiments containing static and dynamic obstacles. Results show the adaptability of our agent in new environments and its superiority in path length and execution time compared to conventional methods. Our code is available on GitHub [1].
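The paper's key ingredients are (a) distance-sensor readings as the agent's observation, rather than a full scene model, and (b) training in randomized scenarios so the policy generalizes to unseen environments. As a rough illustration of how such a training environment could be structured, the following is a minimal, hypothetical sketch (it is not the authors' code; all names, the planar point-robot dynamics, and the reward shaping are assumptions for illustration only):

```python
import numpy as np

class RandomizedReachEnv:
    """Toy sketch: a planar point 'robot' must reach a goal while a ring of
    distance sensors reports the range to the nearest obstacle per ray
    direction. Obstacles are re-randomized on every reset, mimicking the
    scenario randomization the paper credits for generalization.
    Illustrative only -- not the authors' environment or reward design."""

    def __init__(self, n_rays=8, n_obstacles=3, max_range=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_rays = n_rays
        self.n_obstacles = n_obstacles
        self.max_range = max_range
        self.reset()

    def reset(self):
        # Domain randomization: a fresh obstacle layout each episode.
        self.obstacles = self.rng.uniform(-1, 1, size=(self.n_obstacles, 2))
        self.pos = np.zeros(2)
        self.goal = self.rng.uniform(-1, 1, size=2)
        return self._observe()

    def _observe(self):
        # Crude distance sensor: each point obstacle is binned into the
        # nearest ray direction; the reading is the closest range per ray.
        readings = np.full(self.n_rays, self.max_range)
        sector = 2 * np.pi / self.n_rays
        for ob in self.obstacles:
            rel = ob - self.pos
            ray = int(np.round(np.arctan2(rel[1], rel[0]) / sector)) % self.n_rays
            readings[ray] = min(readings[ray], np.linalg.norm(rel))
        # Observation = clipped sensor readings + goal offset: the agent
        # never sees obstacle coordinates directly, only distances.
        return np.concatenate([np.clip(readings, 0.0, self.max_range),
                               self.goal - self.pos])

    def step(self, action):
        self.pos = self.pos + 0.05 * np.asarray(action, dtype=float)
        obs = self._observe()
        dist_goal = np.linalg.norm(self.goal - self.pos)
        collided = any(np.linalg.norm(ob - self.pos) < 0.05
                       for ob in self.obstacles)
        # Dense negative-distance reward with a collision penalty (assumed).
        reward = -dist_goal - (10.0 if collided else 0.0)
        done = collided or dist_goal < 0.05
        return obs, reward, done
```

Any standard off-policy or on-policy DRL algorithm could then be trained against `reset`/`step` in the usual interaction loop; the point of the sketch is only the observation design (distances plus goal offset) and the per-episode randomization.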