Deep Reinforcement Learning Based Online Area Covering Autonomous Robot

Olimpiya Saha, Guohua Ren, Javad Heydari, Viswanath Ganapathy, Mohak Shah
{"title":"Deep Reinforcement Learning Based Online Area Covering Autonomous Robot","authors":"Olimpiya Saha, Guohua Ren, Javad Heydari, Viswanath Ganapathy, Mohak Shah","doi":"10.1109/ICARA51699.2021.9376477","DOIUrl":null,"url":null,"abstract":"Autonomous area covering robots have been increasingly adopted in for diverse applications. In this paper, we investigate the effectiveness of deep reinforcement learning (RL) algorithms for online area coverage while minimizing the overlap. Through simulation experiments in grid based environments and in the Gazebo simulator, we show that Deep Q-Network (DQN) based algorithms efficiently cover unknown indoor environments. Furthermore, through empirical evaluations and theoretical analysis, we demonstrate that DQN with prioritized experience replay (DQN-PER) significantly minimizes the sample complexity while achieving reduced overlap when compared with other DQN variants. In addition, through simulations we demonstrate the performance advantage of DQN-PER over the state-of-the-art area coverage algorithms, BA* and BSA. Our experiments also indicate that a pre-trained RL agent can efficiently cover new unseen environments with minimal additional sample complexity. 
Finally, we propose a novel way of formulating the state representation to arrive at an area-agnostic RL agent for efficiently covering unknown environments.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICARA51699.2021.9376477","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Autonomous area-covering robots have been increasingly adopted for diverse applications. In this paper, we investigate the effectiveness of deep reinforcement learning (RL) algorithms for online area coverage while minimizing overlap. Through simulation experiments in grid-based environments and in the Gazebo simulator, we show that Deep Q-Network (DQN) based algorithms efficiently cover unknown indoor environments. Furthermore, through empirical evaluations and theoretical analysis, we demonstrate that DQN with prioritized experience replay (DQN-PER) significantly reduces sample complexity while achieving lower overlap than other DQN variants. In addition, through simulations we demonstrate the performance advantage of DQN-PER over the state-of-the-art area coverage algorithms BA* and BSA. Our experiments also indicate that a pre-trained RL agent can efficiently cover new, unseen environments with minimal additional sample complexity. Finally, we propose a novel way of formulating the state representation to arrive at an area-agnostic RL agent for efficiently covering unknown environments.
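The abstract's central technique, DQN with prioritized experience replay (DQN-PER), replaces uniform sampling from the replay buffer with sampling proportional to each transition's TD error, which is what drives the reduced sample complexity. The paper does not give implementation details here, so the following is only a minimal sketch of proportional prioritized replay in the style of Schaul et al. (2016); all hyperparameter values and the class/method names are illustrative assumptions, not the authors' code.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability proportional to a
    TD-error-based priority; importance-sampling weights correct
    the bias that non-uniform sampling introduces.
    """

    def __init__(self, capacity=10000, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling (0 = uniform)
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []
        self.priorities = []
        self.pos = 0            # next slot to overwrite once full

    def add(self, transition, td_error=1.0):
        # A new transition's priority comes from its (estimated) TD error.
        p = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities)
        probs = probs / probs.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights, normalized so the largest is 1.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights = weights / weights.max()
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a DQN-PER training loop, each minibatch would be drawn with `sample()`, the TD errors from the Bellman update would be fed back through `update_priorities()`, and the returned `weights` would scale the per-sample loss.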