Autonomous robot exploration and cognitive map building in unknown environments using omnidirectional visual information only

Romain Marie, O. Labbani-Igbida, Pauline Merveilleux, E. Mouaddib

2013 IEEE Workshop on Robot Vision (WORV), 30 May 2013
DOI: 10.1109/WORV.2013.6521937
Citations: 6
Abstract
This paper addresses the problems of autonomous exploration and topological mapping in fully unknown environments using monocular catadioptric vision. We propose an incremental process that allows the robot to extract and combine multiple spatial representations built solely from its visual information: free-space detection, local space topology extraction, place-signature construction, and topological mapping. The efficiency of the proposed system is evaluated in real-world experiments. It opens new perspectives for vision-based autonomous exploration, which remains an open problem in robotics.
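To make the pipeline described in the abstract concrete, the following is a minimal illustrative sketch of an incremental vision-only topological mapping loop. It is not the authors' implementation: the helper names (detect_free_space, build_signature), the signature representation, and the matching threshold are all hypothetical placeholders standing in for the paper's free-space detection, local topology extraction, and place-signature construction steps.

```python
# Illustrative sketch only; all names and thresholds are placeholders,
# not the method published in the paper.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class PlaceNode:
    """One visited place in the topological map and its visual signature."""
    node_id: int
    signature: List[float]                          # place signature (placeholder)
    neighbours: List[int] = field(default_factory=list)


class TopologicalMap:
    """Graph of places connected by traversable free-space paths."""

    def __init__(self, match_threshold: float = 0.2) -> None:
        self.nodes: Dict[int, PlaceNode] = {}
        self.match_threshold = match_threshold

    def match(self, signature: List[float]) -> Optional[int]:
        """Return the id of an already-mapped place with a similar signature."""
        for node in self.nodes.values():
            dist = sum((a - b) ** 2 for a, b in zip(node.signature, signature)) ** 0.5
            if dist < self.match_threshold:
                return node.node_id
        return None

    def add_place(self, signature: List[float], from_id: Optional[int]) -> int:
        """Insert a new place and link it to the place the robot came from."""
        node_id = len(self.nodes)
        self.nodes[node_id] = PlaceNode(node_id, signature)
        if from_id is not None:
            self.nodes[node_id].neighbours.append(from_id)
            self.nodes[from_id].neighbours.append(node_id)
        return node_id


def explore(camera_frames, mapper: TopologicalMap) -> None:
    """Incremental loop: detect free space, build a signature, update the map."""
    current: Optional[int] = None
    for frame in camera_frames:
        free_space = detect_free_space(frame)           # hypothetical helper
        signature = build_signature(frame, free_space)  # hypothetical helper
        known = mapper.match(signature)
        current = known if known is not None else mapper.add_place(signature, current)


# Dummy stand-ins so the sketch runs end to end.
def detect_free_space(frame):
    return frame


def build_signature(frame, free_space):
    return [float(x) for x in frame]


if __name__ == "__main__":
    frames = [[0.0, 0.1], [0.5, 0.6], [0.0, 0.1]]   # toy stand-in for image data
    topo = TopologicalMap()
    explore(frames, topo)
    print(f"{len(topo.nodes)} places mapped")        # revisiting the first place -> 2
```

The sketch only illustrates the overall incremental structure: each new view either matches an existing place (loop closure) or spawns a new node linked to the previous one, so the map grows as the robot explores.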