{"title":"A Mapless Navigation Method Based on Deep Reinforcement Learning and Path Planning","authors":"Jinzhou Wang, Ran Huang","doi":"10.1109/ROBIO55434.2022.10011923","DOIUrl":null,"url":null,"abstract":"The ability of mobile robots to navigate in an unfamiliar environment in human terms is decisive for their applicability to practical activities. Bearing this view in mind, we propose a novel framework for navigation in settings where the environment is a priori unknown and can only be partially observed by the robot with onboard sensors. The proposed hierarchical navigation solution combines deep reinforcement learning-based perception with model-based control. Specifically, a deep reinforcement learning (DRL) network based on Soft Actor-Critic (SAC) algorithm and Long Short-Term Memory (LSTM) is trained to map the robot's states, 2D lidar inputs and goal position to a series of local waypoints which are optimal in the sense of collision avoidance. The waypoints are then employed by a dynamic window approach (DWA) based planner to generate a smooth and dynamically feasible trajectory that is tracked by using feedback control. 
The experiments performed on an actual wheeled robot demonstrate that the proposed scheme enables the robot to reach goal locations more reliably and efficiently in unstructured environments in comparison with purely learning based approach.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROBIO55434.2022.10011923","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The ability of mobile robots to navigate an unfamiliar environment as humans do is decisive for their applicability to practical tasks. With this view in mind, we propose a novel framework for navigation in settings where the environment is a priori unknown and can only be partially observed by the robot through onboard sensors. The proposed hierarchical navigation solution combines deep reinforcement learning-based perception with model-based control. Specifically, a deep reinforcement learning (DRL) network based on the Soft Actor-Critic (SAC) algorithm and Long Short-Term Memory (LSTM) is trained to map the robot's states, 2D lidar inputs, and the goal position to a series of local waypoints that are optimal with respect to collision avoidance. The waypoints are then fed to a dynamic window approach (DWA)-based planner, which generates a smooth and dynamically feasible trajectory that is tracked using feedback control. Experiments performed on an actual wheeled robot demonstrate that the proposed scheme enables the robot to reach goal locations more reliably and efficiently in unstructured environments compared with a purely learning-based approach.
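The abstract's lower layer, a DWA-based local planner, can be illustrated with a minimal sketch. This is not the authors' implementation: the kinematic limits, cost weights, and sampling resolution below are illustrative assumptions, and the scoring (progress toward the current waypoint, clearance from lidar obstacles, forward speed) follows the generic dynamic window approach rather than anything specified in the paper.

```python
import math

def simulate(x, y, th, v, w, dt, steps):
    """Roll out a constant (v, w) command under unicycle kinematics."""
    traj = []
    for _ in range(steps):
        th += w * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        traj.append((x, y))
    return traj

def dwa_plan(pose, vel, waypoint, obstacles,
             v_max=0.5, w_max=1.0, a_v=0.5, a_w=1.0,
             dt=0.1, horizon=1.0, n=11, robot_radius=0.2):
    """Pick the (v, w) command inside the dynamic window that best trades
    off progress toward the waypoint, obstacle clearance, and speed."""
    x, y, th = pose
    v0, w0 = vel
    # Dynamic window: velocities reachable within one control period.
    v_lo, v_hi = max(0.0, v0 - a_v * dt), min(v_max, v0 + a_v * dt)
    w_lo, w_hi = max(-w_max, w0 - a_w * dt), min(w_max, w0 + a_w * dt)
    steps = int(horizon / dt)
    best_score, best_cmd = -math.inf, (0.0, 0.0)
    for i in range(n):
        v = v_lo + (v_hi - v_lo) * i / (n - 1)
        for j in range(n):
            w = w_lo + (w_hi - w_lo) * j / (n - 1)
            traj = simulate(x, y, th, v, w, dt, steps)
            # Minimum distance from any trajectory point to any obstacle.
            clear = min((math.hypot(px - ox, py - oy)
                         for px, py in traj for ox, oy in obstacles),
                        default=math.inf)
            if clear < robot_radius:   # trajectory collides, discard it
                continue
            ex, ey = traj[-1]
            progress = -math.hypot(waypoint[0] - ex, waypoint[1] - ey)
            score = 1.0 * progress + 0.2 * min(clear, 1.0) + 0.1 * v
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd
```

In the full pipeline described above, `waypoint` would come from the SAC/LSTM network rather than being supplied directly, and the returned (v, w) would be handed to the tracking controller.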