Knowledge driven indoor object-goal navigation aid for visually impaired people

Impact Factor: 1.2 | JCR Quartile: Q4 (Computer Science, Artificial Intelligence)
Xuan Hou, Huailin Zhao, Chenxu Wang, Huaping Liu
{"title":"Knowledge driven indoor object-goal navigation aid for visually impaired people","authors":"Xuan Hou,&nbsp;Huailin Zhao,&nbsp;Chenxu Wang,&nbsp;Huaping Liu","doi":"10.1049/ccs2.12061","DOIUrl":null,"url":null,"abstract":"<p>Aiming to help improve quality of life of the visually impaired people, this paper presents a novel wearable aid in the shape of a helmet for helping them find objects in indoor scenes. An object-goal navigation system based on a wearable device is developed, which consists of four modules: object relation prior knowledge (ORPK), perception, decision and feedback. To make the aid also work well in unfamiliar environment, ORPK is used for sub-goal inference to help the user find the target goal. And a method that learns the ORPK from unlabelled images by utilising a scene graph and knowledge graph is proposed. The effectiveness of the aid is demonstrated in real world experiments.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12061","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Computation and Systems","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ccs2.12061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 1

Abstract

Aiming to improve the quality of life of visually impaired people, this paper presents a novel helmet-shaped wearable aid that helps them find objects in indoor scenes. An object-goal navigation system based on a wearable device is developed, consisting of four modules: object relation prior knowledge (ORPK), perception, decision, and feedback. To make the aid work well in unfamiliar environments, ORPK is used for sub-goal inference to help the user find the target object. A method is also proposed that learns the ORPK from unlabelled images by utilising a scene graph and a knowledge graph. The effectiveness of the aid is demonstrated in real-world experiments.
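To illustrate the sub-goal inference idea described above, here is a minimal hypothetical sketch: the ORPK is represented as a table of "target is likely near anchor" probabilities, and when the target itself is not visible, the system steers the user toward the visible object most strongly related to it. All object names, probabilities, and function names below are invented for illustration; the paper learns this prior automatically from unlabelled images via scene graphs and a knowledge graph rather than hand-coding it.

```python
# Hypothetical object relation prior knowledge (ORPK):
# for each target object, P(target is found near anchor object).
ORPK = {
    "cup":    {"table": 0.6, "sink": 0.3, "sofa": 0.1},
    "remote": {"sofa": 0.5, "table": 0.4, "tv": 0.1},
}

def infer_subgoal(target, visible_objects):
    """Pick a navigation sub-goal given the objects currently perceived.

    If the target itself is visible, navigate to it directly; otherwise
    use the prior to choose the most related visible object as an
    intermediate sub-goal. Returns None when no related object is in
    view, signalling a fallback to exploration.
    """
    if target in visible_objects:
        return target
    prior = ORPK.get(target, {})
    candidates = [obj for obj in visible_objects if obj in prior]
    if not candidates:
        return None
    return max(candidates, key=lambda obj: prior[obj])

print(infer_subgoal("cup", ["sofa", "sink"]))   # sink (0.3 > 0.1)
print(infer_subgoal("cup", ["cup", "table"]))   # cup itself is visible
```

In the actual system, the perception module would supply `visible_objects` from the helmet camera and the feedback module would convey the chosen sub-goal to the user; this sketch only shows the decision step.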


Source journal: Cognitive Computation and Systems (Computer Science: Computer Science Applications)
CiteScore: 2.50
Self-citation rate: 0.00%
Articles per year: 39
Review time: 10 weeks