{"title":"Fusion of wifi and vision based on smart devices for indoor localization","authors":"Jing Guo, Shaobo Zhang, Wanqing Zhao, Jinye Peng","doi":"10.1145/3284398.3284401","DOIUrl":null,"url":null,"abstract":"Indoor localization is an important problem with a wide range of applications such as indoor navigation, robot mapping, especially augmented reality(AR). One of most important tasks in AR technology is to estimate the target objects' position information in real environment. The existed AR systems mostly utilize specialized marker to locate, some AR systems track real 3D object in real environment but need to get the the position information of index points in environment in advance. The above methods are not efficiency and limit the application of AR system, so that solving indoor localization problem has significant meaning for the development of AR technology. The development of computer vision (CV) techniques and the ubiquity of intelligent devices with cameras provides the foundation for offering accurate localization services. However, pure CV-based solutions usually involve hundreds of photos and pre-calibration to construct an densely sampled 3D model, which is a labor-intensive overhead for practical deployment. And a large amount of computation cost is difficult to satisfy the requirement for efficiency in mobile device. In this paper, we present iStart, a lightweight, easy deployed, image-based indoor localization system, which can be run on smart phone and VR/AR devices like HTC Vive, Google Glasses and so on. With core techniques rooted in data hierarchy scheme of WiFi fingerprints and photos, iStart also acquires user localization with a single photo of surroundings with high accuracy and short delay. Extensive experiments in various environments show that 90 percentile location deviations are less than 1 m, and 60 percentile location deviations are less than 0.5 m.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3284398.3284401","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Indoor localization is an important problem with a wide range of applications such as indoor navigation, robot mapping, and especially augmented reality (AR). One of the most important tasks in AR is to estimate the positions of target objects in the real environment. Existing AR systems mostly rely on specialized markers for localization; some track real 3D objects in the environment but require the positions of index points to be known in advance. These methods are inefficient and limit the applicability of AR systems, so solving the indoor localization problem is significant for the development of AR technology. Advances in computer vision (CV) and the ubiquity of camera-equipped smart devices provide the foundation for accurate localization services. However, pure CV-based solutions usually require hundreds of photos and pre-calibration to construct a densely sampled 3D model, a labor-intensive overhead for practical deployment, and their heavy computational cost makes it difficult to meet efficiency requirements on mobile devices. In this paper, we present iStart, a lightweight, easily deployed, image-based indoor localization system that can run on smartphones and on VR/AR devices such as the HTC Vive and Google Glass. With core techniques rooted in a data-hierarchy scheme combining WiFi fingerprints and photos, iStart localizes a user from a single photo of the surroundings with high accuracy and short delay. Extensive experiments in various environments show that 90th-percentile location deviations are less than 1 m, and 60th-percentile location deviations are less than 0.5 m.
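The abstract describes a hierarchy in which a coarse WiFi-fingerprint match narrows the search space before the more expensive image-matching stage. The coarse stage can be sketched as a nearest-neighbor lookup in RSSI signal space; the fingerprint database, access-point names, and the plain k-NN choice below are illustrative assumptions, not iStart's actual method:

```python
import math

# Hypothetical fingerprint database: region label -> mean RSSI (dBm) per access point.
# In a hierarchical scheme, the region returned here would select which reference
# photos are considered by the subsequent image-matching stage.
FINGERPRINTS = {
    "room_a": {"ap1": -40, "ap2": -70, "ap3": -80},
    "room_b": {"ap1": -75, "ap2": -45, "ap3": -65},
    "room_c": {"ap1": -85, "ap2": -60, "ap3": -40},
}

def rssi_distance(scan, fingerprint, missing=-100.0):
    """Euclidean distance in signal space; APs absent from a scan get a weak default."""
    aps = set(scan) | set(fingerprint)
    return math.sqrt(sum(
        (scan.get(ap, missing) - fingerprint.get(ap, missing)) ** 2 for ap in aps
    ))

def coarse_locate(scan, k=1):
    """Return the k fingerprinted regions closest to the observed scan."""
    ranked = sorted(FINGERPRINTS, key=lambda loc: rssi_distance(scan, FINGERPRINTS[loc]))
    return ranked[:k]

# A scan that hears ap1 strongly matches room_a's fingerprint.
print(coarse_locate({"ap1": -42, "ap2": -68, "ap3": -82}))  # -> ['room_a']
```

Pruning candidates this way is what keeps the vision stage cheap enough for a mobile device: only photos taken in the matched region need to be compared against the query image.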