Camera-Based Indoor Navigation in Known Environments with ORB for People with Visual Impairment

Fanghao Song, Zhongen Li, Brian C. Clark, Dustin R. Grooms, Chang Liu
{"title":"基于相机的室内导航在已知环境与ORB视觉障碍的人","authors":"Fanghao Song, Zhongen Li, Brian C. Clark, Dustin R. Grooms, Chang Liu","doi":"10.1109/GHTC46280.2020.9342876","DOIUrl":null,"url":null,"abstract":"Indoor navigation is a hard problem for those with visual impairment or blindness. This paper presents a real-time camera-based indoor navigation application for the blind and visually impaired in known environments. Contrary to other systems for similar purposes, our system executes all computation locally on mobile devices and does not need any infrastructure-based sensor and signal support. The use of the system is divided into two stages: an offline stage, in which a sighted person helps with landmark identification and route planning, and an online stage, in which a visually impaired person can navigate with voice prompts from the app. Predefined landmark images and navigation information are stored in a database in the offline stage. Navigation information consists of the landmark position, the turning angle, the turning direction, and the distance to the next landmark. In the online stage, navigation information is retrieved from the current landmark image by successfully matching video frames in real time. A similarity score is calculated by ORB using Hamming distances. Voice feedback is provided for users with a text-to-speech function. Our system minimizes the error caused by users’ manipulation and improves stability and accuracy by updating similarity scores continuously.","PeriodicalId":314837,"journal":{"name":"2020 IEEE Global Humanitarian Technology Conference (GHTC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Camera-Based Indoor Navigation in Known Environments with ORB for People with Visual Impairment\",\"authors\":\"Fanghao Song, Zhongen Li, Brian C. Clark, Dustin R. Grooms, Chang Liu\",\"doi\":\"10.1109/GHTC46280.2020.9342876\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Indoor navigation is a hard problem for those with visual impairment or blindness. This paper presents a real-time camera-based indoor navigation application for the blind and visually impaired in known environments. Contrary to other systems for similar purposes, our system executes all computation locally on mobile devices and does not need any infrastructure-based sensor and signal support. The use of the system is divided into two stages: an offline stage, in which a sighted person helps with landmark identification and route planning, and an online stage, in which a visually impaired person can navigate with voice prompts from the app. Predefined landmark images and navigation information are stored in a database in the offline stage. Navigation information consists of the landmark position, the turning angle, the turning direction, and the distance to the next landmark. In the online stage, navigation information is retrieved from the current landmark image by successfully matching video frames in real time. A similarity score is calculated by ORB using Hamming distances. Voice feedback is provided for users with a text-to-speech function. 
Our system minimizes the error caused by users’ manipulation and improves stability and accuracy by updating similarity scores continuously.\",\"PeriodicalId\":314837,\"journal\":{\"name\":\"2020 IEEE Global Humanitarian Technology Conference (GHTC)\",\"volume\":\"52 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Global Humanitarian Technology Conference (GHTC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GHTC46280.2020.9342876\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Global Humanitarian Technology Conference (GHTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GHTC46280.2020.9342876","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Indoor navigation is a hard problem for people with visual impairment or blindness. This paper presents a real-time, camera-based indoor navigation application for blind and visually impaired users in known environments. Unlike other systems built for similar purposes, our system executes all computation locally on a mobile device and does not require any infrastructure-based sensor or signal support. Use of the system is divided into two stages: an offline stage, in which a sighted person helps with landmark identification and route planning, and an online stage, in which a visually impaired person navigates with voice prompts from the app. In the offline stage, predefined landmark images and navigation information are stored in a database. Navigation information consists of the landmark position, the turning angle, the turning direction, and the distance to the next landmark. In the online stage, navigation information for the current landmark is retrieved by matching live video frames against the stored landmark images in real time. A similarity score is computed from ORB features using Hamming distances. Voice feedback is delivered to users through a text-to-speech function. The system reduces errors caused by the way users handle the camera and improves stability and accuracy by updating similarity scores continuously.
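
The abstract describes the online stage only at a high level: ORB descriptors from the live camera frame are compared against a stored landmark image using Hamming distances, and a similarity score decides whether the landmark has been reached, at which point a voice prompt is generated. The sketch below illustrates that idea with OpenCV in Python. The landmark record layout, the ratio-test scoring, and the 0.25 trigger threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the online matching stage, assuming OpenCV's ORB implementation.
# Landmark record layout, ratio-test score, and threshold are illustrative only.
import cv2

# Offline stage (hypothetical database format): each landmark stores a reference
# image plus the navigation information listed in the abstract.
LANDMARKS = {
    "lobby_door": {
        "image_path": "landmarks/lobby_door.jpg",
        "turn_angle_deg": 90,
        "turn_direction": "left",
        "distance_m": 12.0,
        "next_landmark": "elevator",
    },
}

orb = cv2.ORB_create(nfeatures=1000)
# Brute-force matcher with Hamming distance, the natural metric for binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)


def similarity(frame_gray, landmark_gray):
    """Return a similarity score in [0, 1] between a video frame and a landmark image."""
    _, d_frame = orb.detectAndCompute(frame_gray, None)
    _, d_landmark = orb.detectAndCompute(landmark_gray, None)
    if d_frame is None or d_landmark is None:
        return 0.0
    # Lowe ratio test over the two nearest neighbours; the fraction of matches that
    # survive serves as the score (a common choice, not necessarily the paper's formula).
    pairs = matcher.knnMatch(d_frame, d_landmark, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) / max(len(pairs), 1)


def voice_prompt(info):
    """Build the text handed to a text-to-speech engine once a landmark is matched."""
    return (f"Turn {info['turn_direction']} by {info['turn_angle_deg']} degrees, "
            f"then walk {info['distance_m']} meters to {info['next_landmark']}.")


if __name__ == "__main__":
    info = LANDMARKS["lobby_door"]
    ref = cv2.imread(info["image_path"], cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if similarity(gray, ref) > 0.25:  # threshold is illustrative only
            print(voice_prompt(info))     # a real app would feed this to a TTS engine
            break
    cap.release()
```

The paper's continuous updating of similarity scores across incoming frames, credited with reducing errors from how the user holds the camera, could be approximated here by averaging the score over the last few frames before triggering a prompt; the exact scheme is not specified in the abstract.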