Obtaining Robust Control and Navigation Policies for Multi-robot Navigation via Deep Reinforcement Learning
Christian Jestel, H. Surmann, Jonas Stenzel, Oliver Urbann, Marius Brehler
2021 7th International Conference on Automation, Robotics and Applications (ICARA), published 2021-02-04
DOI: 10.1109/ICARA51699.2021.9376457
Citations: 7
Abstract
Multi-robot navigation is a challenging task in which multiple robots must be coordinated simultaneously within dynamic environments. We apply deep reinforcement learning (DRL) to learn a decentralized end-to-end policy that maps raw sensor data to the agent's command velocities. To enable the policy to generalize, training is performed across different environments and scenarios. The learned policy is tested and evaluated in common multi-robot scenarios such as swapping places, crossing an intersection, and negotiating a bottleneck. The policy allows the agent to recover from dead ends and to navigate through complex environments.
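The core idea of such an end-to-end policy is a network that consumes a raw range scan plus the relative goal and emits linear and angular command velocities directly. The sketch below is a minimal illustrative architecture under assumed details (1-D convolutions over the scan, a small MLP head, 360 beams, tanh-normalized outputs); it is not the paper's exact network.

```python
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    """Hypothetical decentralized navigation policy.

    Input:  raw 2-D lidar scan and the goal relative to the agent.
    Output: normalized (linear, angular) command velocity in [-1, 1].
    Layer sizes and the convolutional scan encoder are assumptions
    for illustration, not the authors' published architecture.
    """

    def __init__(self, n_beams: int = 360):
        super().__init__()
        # Compress the raw scan with 1-D convolutions before the MLP head.
        self.scan_net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            feat = self.scan_net(torch.zeros(1, 1, n_beams)).shape[1]
        # Goal is encoded as (distance, sin(bearing), cos(bearing)).
        self.head = nn.Sequential(
            nn.Linear(feat + 3, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),
        )

    def forward(self, scan: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # scan: (B, n_beams), goal: (B, 3)
        z = self.scan_net(scan.unsqueeze(1))
        return self.head(torch.cat([z, goal], dim=1))

policy = NavPolicy()
scan = torch.rand(4, 360)                         # batch of raw range readings
goal = torch.tensor([[1.0, 0.0, 1.0]]).repeat(4, 1)
cmd = policy(scan, goal)                          # (4, 2): (v, w) per agent
```

Because the policy is decentralized, every robot runs the same network on its own observations; coordination emerges from training in shared scenarios rather than from explicit communication.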