{"title":"Outdoor Robot Navigation System using Game-Based DQN and Augmented Reality","authors":"Sivapong Nilwong, G. Capi","doi":"10.1109/UR49135.2020.9144838","DOIUrl":null,"url":null,"abstract":"This paper presents a deep reinforcement learning based robot outdoor navigation method using visual information. The deep q network (DQN) maps the visual data to robot action in a goal location reaching task. An advantage of the proposed method is that the implemented DQN is trained in the first-person shooter (FPS) game-based simulated environment provided by ViZDoom. The FPS simulated environment reduces the differences between the training and the real environments resulting in a good performance of trained DQNs. In our implementation a marker-based augmented reality algorithm with a simple object detection method is used to train the DQN. The proposed outdoor navigation system is tested in the simulation and real robot implementation, with no additional training. Experimental results showed that the navigation system trained inside the game-based simulation can guide the real robot in outdoor goal directed navigation tasks.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 17th International Conference on Ubiquitous Robots (UR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UR49135.2020.9144838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper presents a deep reinforcement learning-based outdoor robot navigation method that uses visual information. A deep Q-network (DQN) maps visual data to robot actions in a goal-reaching task. An advantage of the proposed method is that the DQN is trained in a first-person shooter (FPS) game-based simulated environment provided by ViZDoom. The FPS simulated environment reduces the gap between the training and real environments, so the trained DQNs transfer well. In our implementation, a marker-based augmented reality algorithm combined with a simple object detection method is used to train the DQN. The proposed outdoor navigation system is tested both in simulation and on a real robot, with no additional training. Experimental results showed that the navigation system trained inside the game-based simulation can guide the real robot in outdoor goal-directed navigation tasks.
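The core idea in the abstract is a Q-network that maps a visual observation to a discrete robot action by taking the argmax over per-action Q-values. A minimal sketch of that mapping is shown below; the two-layer network, the 8x8 test frame, and the three-action set (forward / turn-left / turn-right) are illustrative assumptions, not the authors' actual architecture or action space.

```python
import numpy as np

# Illustrative sketch of a DQN-style policy head: an image observation is
# flattened, passed through a small network, and the action with the highest
# Q-value is selected. The layer sizes and action names are assumptions.
ACTIONS = ["forward", "turn_left", "turn_right"]

class TinyQNetwork:
    def __init__(self, obs_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, 32))
        self.b1 = np.zeros(32)
        self.w2 = rng.normal(0.0, 0.1, (32, n_actions))
        self.b2 = np.zeros(n_actions)

    def q_values(self, obs):
        h = np.maximum(obs @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.w2 + self.b2                  # one Q-value per action

    def act(self, obs):
        # Greedy policy: pick the action with the largest estimated Q-value.
        return int(np.argmax(self.q_values(obs)))

# A synthetic 8x8 grayscale frame stands in for the camera / ViZDoom view;
# the bright blob plays the role of a detected marker.
frame = np.zeros((8, 8))
frame[3:5, 3:5] = 1.0
obs = frame.flatten()

net = TinyQNetwork(obs_dim=obs.size, n_actions=len(ACTIONS))
action = net.act(obs)
print(ACTIONS[action])
```

In the paper's setup the network is trained with the DQN algorithm inside the ViZDoom simulator and then deployed on the robot unchanged; the sketch above only shows the inference-time observation-to-action mapping, with untrained random weights.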