A new strategy for improving the self-positioning precision of an autonomous mobile robot

An Zhanfu, Pei Dong, Yong HongWu, Wang Quanzhou
DOI: 10.1109/ICOT.2014.6956605
Published in: 2014 International Conference on Orange Technologies
Publication date: 2014-11-20
Citations: 2

Abstract

A new strategy for improving the self-positioning precision of an autonomous mobile robot
We address the problem of precise self-positioning for an autonomous mobile robot. The problem is formulated as a manifold perception algorithm, in which the robot's precise position is evaluated from its distance to obstacles, critical features or signs in the surroundings, and the depth of its surrounding images. We propose to localize the robot accurately with an algorithm that fuses the local plane-coordinate information obtained from laser ranging with the spatial visual information represented by depth-image features, using variational weights so that the local distance information from laser ranging and the depth vision information complement each other. First, we apply an EKF to the data gathered by the laser to obtain a coarse location of the robot; then we capture depth images with an RGB-D camera and extract SURF features from them. When these features are matched against training examples, the RANSAC algorithm is used to check the consistency of the spatial structures. Finally, extensive experiments show that our fusion method significantly improves localization accuracy compared with using either the EKF on laser data alone or SURF feature matching on depth images alone. In particular, experiments with variational fusion weights demonstrate that, with this method, the robot can accomplish precise self-localization in real time.
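The abstract does not specify how the variational fusion weights are computed. One plausible reading is covariance-weighted fusion, where each source's weight adapts to its current uncertainty: the EKF-based laser estimate and the depth-vision estimate are combined so that each sensor dominates along the axis where it is most reliable. The sketch below illustrates that idea; all function names, numbers, and covariance values are hypothetical and not taken from the paper.

```python
import numpy as np

def fuse_estimates(p_laser, cov_laser, p_vision, cov_vision):
    """Covariance-weighted fusion of two 2-D position estimates.

    Each source is weighted by the inverse of its error covariance,
    so the weights vary with the sensors' current uncertainty -- one
    possible interpretation of "variational fusion weights".
    """
    w_laser = np.linalg.inv(cov_laser)
    w_vision = np.linalg.inv(cov_vision)
    cov_fused = np.linalg.inv(w_laser + w_vision)
    p_fused = cov_fused @ (w_laser @ p_laser + w_vision @ p_vision)
    return p_fused, cov_fused

# Coarse EKF position from laser ranging (here: uncertain along y).
p_l = np.array([1.00, 2.10])
C_l = np.diag([0.04, 0.25])

# Position from depth-image feature matching (here: uncertain along x).
p_v = np.array([1.10, 2.00])
C_v = np.diag([0.25, 0.04])

p_f, C_f = fuse_estimates(p_l, C_l, p_v, C_v)
# The fused estimate leans on each sensor along its reliable axis,
# and the fused variance is smaller than either input's on both axes.
print(p_f, np.diag(C_f))
```

With diagonal covariances this reduces to a per-axis weighted average: along x the laser weight is 1/0.04 = 25 against the vision weight 1/0.25 = 4, so the fused x stays close to the laser reading, and symmetrically for y.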