Floor Plan Based Active Global Localization and Navigation Aid for Persons With Blindness and Low Vision

Impact Factor: 4.6 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics)
R. G. Goswami;H. Sinha;P. V. Amith;J. Hari;P. Krishnamurthy;J. Rizzo;F. Khorrami
DOI: 10.1109/LRA.2024.3486208
Journal: IEEE Robotics and Automation Letters, vol. 9, no. 12, pp. 11058–11065
Published: 2024-10-24 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10734166/
Citations: 0

Abstract

Navigation of an agent, such as a person with blindness or low vision, in an unfamiliar environment poses substantial difficulties, even in scenarios where prior maps, like floor plans, are available. It becomes essential first to determine the agent's pose in the environment. The task's complexity increases when the agent also needs directions for exploring the environment to reduce uncertainty in the agent's position. This problem of active global localization typically involves finding a transformation to match the agent's sensor-generated map to the floor plan while providing a series of point-to-point directions for effective exploration. Current methods fall into two categories: learning-based, requiring extensive training for each environment, or non-learning-based, which generally depend on prior knowledge of the agent's initial position or on floor plan maps created with the same sensor modality as the agent. Addressing these limitations, we introduce a novel system for real-time, active global localization and navigation for persons with blindness and low vision. By generating semantically informed real-time goals, our approach enables local exploration and the creation of a 2D semantic point cloud for effective global localization. Moreover, it dynamically corrects for odometry drift using the architectural floor plan, independent of the agent's global position, and introduces a new method for real-time loop closure on reversal. Our approach's effectiveness is validated through multiple real-world indoor experiments, also highlighting its adaptability and ease of extension to any mobile robot.
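The core geometric step the abstract describes is finding a rigid transformation that aligns the agent's sensor-generated 2D point cloud with the floor plan. The paper does not disclose its matching algorithm here, so the following is only a minimal illustrative sketch of one standard way to do this: iterative closest point (ICP) with a closed-form least-squares rigid transform. All function names and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares 2D rigid transform (Kabsch) mapping
    paired points src -> dst, returning rotation R and translation t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp_2d(scan, plan, iters=50):
    """Align a 2D scan point cloud to floor-plan points by repeatedly
    matching nearest neighbors and re-estimating the rigid transform."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = scan.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbor correspondences (fine for small clouds).
        d = np.linalg.norm(cur[:, None, :] - plan[None, :, :], axis=2)
        matches = plan[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Note that plain ICP only converges from a reasonably close initial guess, which is exactly why the paper emphasizes active exploration: gathering more semantic structure before attempting a global match reduces the ambiguity that a purely geometric aligner like this cannot resolve on its own.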
Source journal: IEEE Robotics and Automation Letters (Computer Science: Computer Science Applications)
CiteScore: 9.60
Self-citation rate: 15.40%
Articles per year: 1428
Journal scope: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.