R. G. Goswami; H. Sinha; P. V. Amith; J. Hari; P. Krishnamurthy; J. Rizzo; F. Khorrami
IEEE Robotics and Automation Letters, vol. 9, no. 12, pp. 11058-11065. Published 2024-10-24. DOI: 10.1109/LRA.2024.3486208
Floor Plan Based Active Global Localization and Navigation Aid for Persons With Blindness and Low Vision
Navigation of an agent, such as a person with blindness or low vision, in an unfamiliar environment poses substantial difficulties, even in scenarios where prior maps, like floor plans, are available. It becomes essential first to determine the agent's pose in the environment. The task's complexity increases when the agent also needs directions for exploring the environment to reduce uncertainty in the agent's position. This problem of active global localization typically involves finding a transformation to match the agent's sensor-generated map to the floor plan while providing a series of point-to-point directions for effective exploration. Current methods fall into two categories: learning-based, requiring extensive training for each environment, or non-learning-based, which generally depend on prior knowledge of the agent's initial position or on floor plan maps created with the same sensor modality as the agent. Addressing these limitations, we introduce a novel system for real-time, active global localization and navigation for persons with blindness and low vision. By generating semantically informed real-time goals, our approach enables local exploration and the creation of a 2D semantic point cloud for effective global localization. Moreover, it dynamically corrects for odometry drift using the architectural floor plan, independent of the agent's global position, and introduces a new method for real-time loop closure on reversal. Our approach's effectiveness is validated through multiple real-world indoor experiments, which also highlight its adaptability and ease of extension to any mobile robot.
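The core matching step the abstract describes, aligning a sensor-generated 2D map to a floor plan, can be illustrated with a generic pose-hypothesis search. This is not the authors' algorithm (the paper's semantic and active-exploration machinery is not reproduced here); it is a minimal sketch, assuming the floor plan is an occupancy grid and the sensor map is a 2D point cloud, where each candidate pose is scored by how many transformed points land on occupied floor-plan cells.

```python
import numpy as np

def score_pose(floor_plan, points, x, y, theta, res=0.05):
    """Score a candidate pose (x, y, theta) by the fraction of sensor
    points that land on occupied floor-plan cells after a rigid transform.
    floor_plan: 2D boolean occupancy grid (True = wall), res metres/cell.
    points: (N, 2) array of 2D points in the agent's local frame (metres)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world = points @ R.T + np.array([x, y])    # rotate, then translate
    cells = np.floor(world / res).astype(int)  # metres -> grid indices
    h, w = floor_plan.shape
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < w) & \
            (cells[:, 1] >= 0) & (cells[:, 1] < h)
    cells = cells[valid]
    if len(cells) == 0:
        return 0.0
    # index as [row, col] = [y-cell, x-cell]
    return float(floor_plan[cells[:, 1], cells[:, 0]].mean())

def global_localize(floor_plan, points, xs, ys, thetas):
    """Brute-force search over a discrete pose grid; return the best pose
    and its score. A real system would prune hypotheses during exploration."""
    best, best_pose = -1.0, None
    for x in xs:
        for y in ys:
            for th in thetas:
                s = score_pose(floor_plan, points, x, y, th)
                if s > best:
                    best, best_pose = s, (x, y, th)
    return best_pose, best
```

In this toy form the search is exhaustive; the paper's contribution is precisely to make such localization *active*, issuing exploration goals so that new observations disambiguate competing pose hypotheses, and to correct odometry drift against the floor plan as the agent moves.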
Journal scope:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.