Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images

Impact Factor: 1.4 (JCR Q4, Robotics)
David García, L. F. Rojo, A. G. Aparicio, L. P. Castelló, Ó. R. García
{"title":"基于全向图像的基于外观和特征的视觉里程计","authors":"David García, L. F. Rojo, A. G. Aparicio, L. P. Castelló, Ó. R. García","doi":"10.1155/2012/797063","DOIUrl":null,"url":null,"abstract":"In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot by means of a camera sensor solely. A visual odometry provides an essential information for trajectory estimation in problems such as Localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of the motion transformation between two poses is incrementally computed, since only the processing of two consecutive omnidirectional images is required. Particularly, we exploited the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consists of a large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real-time.","PeriodicalId":51834,"journal":{"name":"Journal of Robotics","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2012-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2012/797063","citationCount":"27","resultStr":"{\"title\":\"Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images\",\"authors\":\"David García, L. F. Rojo, A. G. Aparicio, L. P. Castelló, Ó. R. García\",\"doi\":\"10.1155/2012/797063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of mobile autonomous robots, visual odometry entails the retrieval of a motion transformation between two consecutive poses of the robot by means of a camera sensor solely. A visual odometry provides an essential information for trajectory estimation in problems such as Localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of the motion transformation between two poses is incrementally computed, since only the processing of two consecutive omnidirectional images is required. Particularly, we exploited the versatility of the information gathered by omnidirectional images to perform both an appearance-based and a feature-based method to obtain visual odometry results. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consists of a large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. 
Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real-time.\",\"PeriodicalId\":51834,\"journal\":{\"name\":\"Journal of Robotics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2012-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1155/2012/797063\",\"citationCount\":\"27\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1155/2012/797063\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2012/797063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 27

Abstract

In the field of mobile autonomous robots, visual odometry is the estimation of the motion transformation between two consecutive poses of the robot using only a camera sensor. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation method based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode a large amount of scene information into a single image. The motion transformation between two poses is computed incrementally, since only two consecutive omnidirectional images need to be processed. In particular, we exploit the versatility of the information gathered by omnidirectional images to implement both an appearance-based and a feature-based method for visual odometry. We carried out a set of experiments in real indoor environments to test the validity and suitability of both methods. The data used in the experiments consist of large sets of omnidirectional images captured along the robot's trajectory in three different real scenarios. Experimental results demonstrate the accuracy of the estimations and the capability of both methods to work in real time.
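The paper itself does not include source code. As a rough illustration only, the following Python sketch shows what one incremental estimation step of each approach could look like, using OpenCV and NumPy rather than the authors' implementation: yaw_from_panorama captures the appearance-based idea that a rotation about the camera's vertical axis appears as a horizontal circular shift of the unwrapped panorama, while vo_step is a generic feature-based step (ORB matching plus RANSAC essential-matrix estimation). The intrinsic matrix K, the pinhole-camera simplification, and all parameter values are assumptions made for the example.

```python
import cv2
import numpy as np

# Hypothetical intrinsics: the paper uses a calibrated omnidirectional
# camera; a conventional pinhole model is assumed here for simplicity.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])

def yaw_from_panorama(prev_pano, curr_pano):
    """Appearance-based yaw estimate between two unwrapped panoramas.

    Inputs are 2D grayscale panoramas of equal width. A rotation about
    the vertical axis shifts the panorama horizontally, so the shift
    that maximizes the circular cross-correlation of the column-mean
    intensity profiles gives the rotation angle.
    """
    a = prev_pano.mean(axis=0) - prev_pano.mean()
    b = curr_pano.mean(axis=0) - curr_pano.mean()
    # FFT-based circular cross-correlation over the panorama width.
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    return 2.0 * np.pi * shift / a.size  # column shift -> radians

def vo_step(prev_gray, curr_gray):
    """Feature-based step: estimate (R, t) between two consecutive frames.

    As in any monocular pipeline, the translation t is recoverable
    only up to scale.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Brute-force Hamming matching with cross-check to prune bad matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC essential-matrix estimation rejects outlier correspondences.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

In a full pipeline, such a step would be applied to every pair of consecutive images and the resulting relative transforms chained together to recover the robot's trajectory, which is the incremental scheme the abstract describes.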
Source Journal: Journal of Robotics
CiteScore: 3.70
Self-citation rate: 5.60%
Articles published: 77
Review time: 22 weeks
Journal description: Journal of Robotics publishes papers on all aspects of automated mechanical devices, from their design and fabrication to their testing and practical implementation. The journal welcomes submissions from the associated fields of materials science, electrical and computer engineering, and machine learning and artificial intelligence that contribute towards advances in the technology and understanding of robotic systems.