Occupancy Map Inpainting for Online Robot Navigation
Minghan Wei, Daewon Lee, Volkan Isler, Daniel D. Lee
2021 IEEE International Conference on Robotics and Automation (ICRA), May 30, 2021
DOI: 10.1109/ICRA48506.2021.9561790
Citations: 8
Abstract
In this work, we focus on mobile robot navigation in indoor environments where occlusions and field-of-view limitations hinder onboard sensing capabilities. We show that the effective sensing footprint of a camera mounted on a robot can be substantially extended using learning-based approaches. Specifically, we consider the task of building an occupancy map for autonomous navigation of a robot equipped with a depth camera. In our approach, a local occupancy map is first computed directly from the camera's measurements. An inpainting network then augments the map with further information: the occupancy probabilities of unseen grid cells. A novel aspect of our approach is that, rather than using direct supervision from ground truth, we derive the supervision signal from a second camera with a wider field of view. The training focuses on predicting extensions of the sensed data. To test the effectiveness of our approach, we use a robot setup with a single camera mounted 0.5 m above the ground. We compare navigation performance using raw maps built from only this camera's input (baseline) versus maps inpainted by our network. Our method outperforms the baseline approach even in completely new environments not included in the training set, yielding paths up to 21% shorter. A real-time implementation of our method on a mobile robot is also tested in home and office environments.
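To make the training scheme the abstract describes concrete, the sketch below shows a minimal masked-supervision inpainting setup in PyTorch. Everything here is an illustrative assumption rather than the authors' implementation: the tiny convolutional architecture, the grid size, and the names InpaintNet and masked_inpainting_loss are all hypothetical. The one idea it is meant to demonstrate is computing the loss only on cells that the wider-FOV "teacher" camera observed but the robot's own camera did not.

```python
# Minimal sketch of occupancy-map inpainting with masked supervision.
# Architecture, shapes, and all names are illustrative assumptions;
# the paper's actual network and training details may differ.
import torch
import torch.nn as nn

class InpaintNet(nn.Module):
    """Small encoder-decoder stand-in: maps a partial occupancy grid
    (plus its observation mask) to per-cell occupancy probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, occ, known_mask):
        # Concatenate the partial grid with a mask of observed cells,
        # so the network knows which cells are real measurements.
        x = torch.cat([occ, known_mask], dim=1)
        return torch.sigmoid(self.net(x))  # occupancy probability per cell

def masked_inpainting_loss(pred, teacher_occ, robot_mask, teacher_mask):
    """Supervise only where the wide-FOV teacher camera saw the world
    but the robot's camera did not: the 'extension' of the sensed data."""
    target_mask = teacher_mask * (1.0 - robot_mask)
    bce = nn.functional.binary_cross_entropy(
        pred, teacher_occ, reduction="none")
    return (bce * target_mask).sum() / target_mask.sum().clamp(min=1.0)

# Usage: one training step on a batch of synthetic local grids.
model = InpaintNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
occ = torch.rand(8, 1, 64, 64)           # partial map from robot camera
robot_mask = (torch.rand(8, 1, 64, 64) > 0.5).float()
teacher_occ = torch.rand(8, 1, 64, 64)   # map from the better-FOV camera
teacher_mask = torch.ones(8, 1, 64, 64)  # cells the teacher observed

pred = model(occ * robot_mask, robot_mask)
loss = masked_inpainting_loss(pred, teacher_occ, robot_mask, teacher_mask)
opt.zero_grad(); loss.backward(); opt.step()
```

Masking the loss this way trains the network to predict beyond what the onboard camera sensed rather than to reproduce cells it already observed, which matches the supervision strategy described in the abstract.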