Sequential Monte Carlo localization in topometric appearance maps

Impact Factor 7.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Robotics)
Alberto Jaenal, Francisco-Angel Moreno, J. Gonzalez-Jimenez
{"title":"Sequential Monte Carlo localization in topometric appearance maps","authors":"Alberto Jaenal, Francisco-Angel Moreno, J. Gonzalez-Jimenez","doi":"10.1177/02783649231197723","DOIUrl":null,"url":null,"abstract":"Representing the scene appearance by a global image descriptor (BoW, NetVLAD, etc.) is a widely adopted choice to address Visual Place Recognition (VPR). The main reasons are that appearance descriptors can be effectively provided with radiometric and perspective invariances as well as they can deal with large environments because of their compactness. However, addressing metric localization with such descriptors (a problem called Appearance-based Localization or AbL) achieves much poorer accuracy than those techniques exploiting the observation of 3D landmarks, which represent the standard for visual localization. In this paper, we propose ALLOM (Appearance-based Localization with Local Observation Models) which addresses AbL by leveraging the topological location of a robot within a map to achieve accurate metric estimations. This topology-assisted metric localization is implemented with a sequential Monte Carlo Bayesian filter that applies a specific observation model for each different place of the environment, thus taking advantage of the local correlation between the pose and the appearance descriptor within each region. ALLOM also benefits from the topological structure of the map to detect eventual robot loss-of-tracking and to effectively cope with its relocalization by applying VPR. Our proposal demonstrates superior metric localization capability compared to different state-of-the-art AbL methods under a wide range of situations.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Robotics Research","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/02783649231197723","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

Representing the scene appearance with a global image descriptor (BoW, NetVLAD, etc.) is a widely adopted choice for addressing Visual Place Recognition (VPR). The main reasons are that appearance descriptors can be effectively endowed with radiometric and perspective invariance and that, thanks to their compactness, they can deal with large environments. However, addressing metric localization with such descriptors (a problem called Appearance-based Localization, or AbL) achieves much poorer accuracy than techniques that exploit the observation of 3D landmarks, which represent the standard for visual localization. In this paper, we propose ALLOM (Appearance-based Localization with Local Observation Models), which addresses AbL by leveraging the topological location of a robot within a map to achieve accurate metric estimates. This topology-assisted metric localization is implemented with a sequential Monte Carlo Bayesian filter that applies a specific observation model for each place of the environment, thus taking advantage of the local correlation between the pose and the appearance descriptor within each region. ALLOM also benefits from the topological structure of the map to detect possible robot loss-of-tracking and to cope effectively with relocalization by applying VPR. Our proposal demonstrates superior metric localization capability compared to different state-of-the-art AbL methods under a wide range of situations.
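The abstract only describes the core mechanism at a high level. The Python sketch below illustrates the general idea of a sequential Monte Carlo (particle) filter whose measurement update is dispatched to a per-place observation model; every name here (Place, obs_model, the nearest-centroid place selection, the Gaussian motion noise) is an illustrative assumption, not the paper's actual implementation.

```python
# Minimal sketch of a topology-assisted particle filter for appearance-based
# localization, in the spirit of the approach summarized in the abstract.
# All concrete choices (data structures, noise model, place selection) are
# illustrative assumptions.
import numpy as np

class Place:
    """One topological node of the map with its own local observation model."""
    def __init__(self, node_id, descriptor_centroid, obs_model):
        self.node_id = node_id
        self.descriptor_centroid = descriptor_centroid  # e.g., mean appearance descriptor
        self.obs_model = obs_model  # callable: (pose, descriptor) -> likelihood

def predict(particles, odometry, noise_std=0.05):
    """Propagate particles with a simple motion model (odometry + Gaussian noise)."""
    return particles + odometry + np.random.normal(0.0, noise_std, particles.shape)

def nearest_place(descriptor, places):
    """Pick the topological place whose appearance centroid is closest to the query."""
    dists = [np.linalg.norm(descriptor - p.descriptor_centroid) for p in places]
    return places[int(np.argmin(dists))]

def update(particles, weights, descriptor, places):
    """Reweight particles with the observation model local to the current place."""
    place = nearest_place(descriptor, places)
    likelihoods = np.array([place.obs_model(x, descriptor) for x in particles])
    weights = weights * likelihoods
    total = weights.sum()
    if total < 1e-12:  # degenerate weights: candidate loss-of-tracking condition
        return particles, None, place
    return particles, weights / total, place

def resample(particles, weights):
    """Systematic resampling to combat particle degeneracy."""
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

When `update` returns degenerate weights, a caller could reinitialize the filter from a VPR query over the place centroids, mirroring the relocalization step described in the abstract; this, too, is only a sketch of the idea.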
Source Journal

International Journal of Robotics Research (Engineering & Technology - Robotics)

CiteScore: 22.20
Self-citation rate: 0.00%
Articles per year: 34
Review time: 6-12 weeks
About the journal: The International Journal of Robotics Research (IJRR) has been a leading peer-reviewed publication in the field for over two decades. It holds the distinction of being the first scholarly journal dedicated to robotics research. IJRR presents cutting-edge and thought-provoking original research papers, articles, and reviews that delve into groundbreaking trends, technical advancements, and theoretical developments in robotics. Renowned scholars and practitioners contribute to its content, offering their expertise and insights. This journal covers a wide range of topics, going beyond narrow technical advancements to encompass various aspects of robotics. The primary aim of IJRR is to publish work that has lasting value for the scientific and technological advancement of the field. Only original, robust, and practical research that can serve as a foundation for further progress is considered for publication. The focus is on producing content that will remain valuable and relevant over time. In summary, IJRR stands as a prestigious publication that drives innovation and knowledge in robotics research.