InPosNet: Context Aware DNN for Visual SLAM

Anvaya Rai, B. Lall, Astha Zalani, Raghawendra Prakash Singh, Shikha Srivastava
{"title":"InPosNet: Context Aware DNN for Visual SLAM","authors":"Anvaya Rai, B. Lall, Astha Zalani, Raghawendra Prakash Singh, Shikha Srivastava","doi":"10.1109/IRI58017.2023.00012","DOIUrl":null,"url":null,"abstract":"This paper introduces a novel approach to accurately localize a subject in indoor environments by using the scene images captured from the subject’s mobile phone camera. The objective of this work is to present a novel deep neural network (DNN), called InPosNet, that generates a concise representation of an indoor scene while being able to distinguish between their inherent symmetry. It also enables the user in real time distinction between the images of the same location but captured from different orientations, thereby enabling the user to detect the orientation along with position. A localization accuracy of less than 1 meter from ground truth is achieved and enumerated through the experimental results. The novel DNN presented in the work is motivated by MobileNetv3-Small [2], followed by PCA based feature space transformation. PCA helps in feature space dimensionality reduction and projection of query images onto an optimally dense subspace of the original latent feature space. The goal is to present a vision based system that will have the ability to be used for indoor positioning, without any need for additional infrastructure or external hardware.","PeriodicalId":290818,"journal":{"name":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 24th International Conference on Information Reuse and Integration for Data Science (IRI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRI58017.2023.00012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper introduces a novel approach to accurately localizing a subject in indoor environments using scene images captured from the subject's mobile phone camera. The objective of this work is to present a novel deep neural network (DNN), called InPosNet, that generates a concise representation of an indoor scene while remaining able to distinguish between scenes despite their inherent symmetry. It also enables real-time distinction between images of the same location captured from different orientations, so that the user's orientation is detected along with position. A localization accuracy of less than 1 meter from ground truth is achieved and demonstrated in the experimental results. The DNN presented in this work is motivated by MobileNetV3-Small [2] and is followed by a PCA-based feature space transformation. PCA reduces the dimensionality of the feature space and projects query images onto an optimally dense subspace of the original latent feature space. The goal is a vision-based system that can be used for indoor positioning without any additional infrastructure or external hardware.
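The pipeline the abstract describes, a MobileNetV3-Small-style backbone producing compact scene embeddings, followed by a PCA projection into a denser subspace and matching of query images against a reference database, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the torchvision backbone, the 64-dimensional PCA target, the nearest-neighbour matching step, and the `ref_images`/`ref_poses` placeholders are all assumptions made for the sketch.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Feature extractor: MobileNetV3-Small with its classifier head removed,
# so each image maps to the 576-d pooled feature vector.
backbone = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(images):
    """Map a list of PIL images to backbone feature vectors (N, 576)."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

# Offline stage: embed reference images of the indoor environment and fit PCA
# to project the latent features onto a lower-dimensional, denser subspace.
# ref_images / ref_poses are hypothetical placeholders for survey data
# (images of the space paired with known positions and orientations).
ref_feats = embed(ref_images)                   # (N, 576)
pca = PCA(n_components=64)                      # 64 is an assumed target dimension
ref_proj = pca.fit_transform(ref_feats)         # (N, 64)
index = NearestNeighbors(n_neighbors=1).fit(ref_proj)

# Online stage: project a query image into the same subspace and return the
# pose (position + orientation) of the closest reference image.
def localize(query_image):
    q = pca.transform(embed([query_image]))     # (1, 64)
    _, idx = index.kneighbors(q)
    return ref_poses[idx[0, 0]]
```

Because the reference embeddings and the PCA basis are computed once offline, the online cost per query is a single forward pass plus a low-dimensional nearest-neighbour lookup, which is what makes a purely vision-based, infrastructure-free indoor positioning pipeline plausible on a mobile device.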