Title: A new strategy for improving the self-positioning precision of an autonomous mobile robot
Authors: An Zhanfu, Pei Dong, Yong HongWu, Wang Quanzhou
DOI: 10.1109/ICOT.2014.6956605
Published in: 2014 International Conference on Orange Technologies, 2014-11-20
Citations: 2
Abstract
We address the problem of precise self-positioning of an autonomous mobile robot. The problem is formulated as a manifold perception task in which the robot's position is evaluated from its distance to obstacles, critical features or signs in the surroundings, and the depth of its surrounding images. We propose to localize the robot accurately with an algorithm that fuses the local plane-coordinate information obtained from laser ranging with the spatial visual information represented by depth-image features, using variational weights so that the local distance information from laser ranging and the depth-vision information complement each other. First, we apply an extended Kalman filter (EKF) to the laser data to obtain a coarse location of the robot. We then use an RGB-D camera to capture depth images and extract SURF features from them; when these features are matched against training examples, the RANSAC algorithm is used to check the consistency of their spatial structures. Finally, extensive experiments show that our fusion method significantly improves localization accuracy compared with using either the EKF on laser data or SURF feature matching on depth images alone. In particular, experiments with variational fusion weights demonstrated that, with this method, the robot can accomplish self-localization precisely and in real time.
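The variational-weight fusion of the laser/EKF estimate with the depth-vision estimate can be sketched as a confidence-dependent weighted average. The abstract does not specify the paper's weighting rule, so the inverse-variance weights, function names, and 2-D position representation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_estimates(p_laser, var_laser, p_vision, var_vision):
    """Fuse two 2-D position estimates with variational (inverse-variance)
    weights: the more uncertain a sensor's estimate is, the less it
    contributes to the fused position. Illustrative sketch only.

    p_laser, p_vision : np.ndarray of shape (2,), (x, y) estimates
    var_laser, var_vision : scalar variances for each estimate
    Returns the fused (x, y) position and its fused variance.
    """
    w_laser = 1.0 / var_laser      # weight grows as uncertainty shrinks
    w_vision = 1.0 / var_vision
    w_sum = w_laser + w_vision
    fused = (w_laser * p_laser + w_vision * p_vision) / w_sum
    fused_var = 1.0 / w_sum        # fused estimate is never less certain
    return fused, fused_var

# Example: equally confident sensors -> midpoint, halved variance.
fused, var = fuse_estimates(np.array([1.0, 0.0]), 1.0,
                            np.array([3.0, 0.0]), 1.0)
```

With equal variances the fusion reduces to a simple average; as one sensor degrades (e.g. laser ranging near glass, or vision in low texture), its variance rises and its weight falls, which matches the abstract's point that the two modalities complement each other.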