Deep Reinforcement Learning Based Online Area Covering Autonomous Robot
Olimpiya Saha, Guohua Ren, Javad Heydari, Viswanath Ganapathy, Mohak Shah
2021 7th International Conference on Automation, Robotics and Applications (ICARA), published 2021-02-04
DOI: 10.1109/ICARA51699.2021.9376477
Citations: 5
Abstract
Autonomous area covering robots have been increasingly adopted for diverse applications. In this paper, we investigate the effectiveness of deep reinforcement learning (RL) algorithms for online area coverage while minimizing overlap. Through simulation experiments in grid-based environments and in the Gazebo simulator, we show that Deep Q-Network (DQN) based algorithms efficiently cover unknown indoor environments. Furthermore, through empirical evaluations and theoretical analysis, we demonstrate that DQN with prioritized experience replay (DQN-PER) significantly reduces sample complexity while also achieving lower overlap compared with other DQN variants. In addition, through simulations we demonstrate the performance advantage of DQN-PER over the state-of-the-art area coverage algorithms BA* and BSA. Our experiments also indicate that a pre-trained RL agent can efficiently cover new, unseen environments with minimal additional sample complexity. Finally, we propose a novel way of formulating the state representation to arrive at an area-agnostic RL agent for efficiently covering unknown environments.
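The abstract's central component is DQN with prioritized experience replay. As an illustration only (not the authors' implementation, whose state representation and hyperparameters are not given here), the sketch below shows a minimal proportional prioritized replay buffer of the kind DQN-PER builds on: transitions are sampled with probability proportional to their TD-error-based priority, and importance-sampling weights correct the resulting bias. All class and parameter names are assumptions made for this example.

```python
# Illustrative sketch of proportional prioritized experience replay,
# the mechanism underlying DQN-PER. Not the paper's code.
from collections import namedtuple

import numpy as np

Transition = namedtuple("Transition", "state action reward next_state done")


class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def push(self, *args):
        # New transitions get the current maximum priority so that each
        # is sampled at least once before its priority is refined.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(Transition(*args))
        else:
            self.buffer[self.pos] = Transition(*args)
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        samples = [self.buffer[i] for i in indices]
        # Importance-sampling weights compensate for non-uniform sampling.
        weights = (len(self.buffer) * probs[indices]) ** (-beta)
        weights /= weights.max()
        return samples, indices, weights

    def update_priorities(self, indices, td_errors, eps=1e-6):
        # Priority is proportional to the magnitude of the TD error.
        for idx, err in zip(indices, td_errors):
            self.priorities[idx] = abs(err) + eps
```

In a coverage setting, each stored state would encode the robot's local occupancy/coverage grid, and the TD errors passed to `update_priorities` would come from the DQN loss after each training step; replaying high-error transitions more often is what drives the sample-complexity reduction the abstract attributes to DQN-PER.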