LSFB: A Low-cost and Scalable Framework for Building Large-Scale Localization Benchmark

Haomin Liu, Mingxuan Jiang, Zhuang Zhang, Xiaopeng Huang, Linsheng Zhao, Meng Hang, Youji Feng, H. Bao, Guofeng Zhang

2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), November 2020. DOI: 10.1109/ISMAR-Adjunct51615.2020.00065
Citations: 3

Abstract

With the rapid development of mobile sensors, network infrastructure, and cloud computing, AR application scenarios are expanding from small- and medium-scale settings to large-scale environments. Localization in large-scale environments is a critical requirement for AR applications. Most commonly used localization techniques require a large amount of data with groundtruth localization for algorithm benchmarking or model training. Existing groundtruth collection methods can only be used outdoors, or require expensive equipment or special deployments in the environment, and are therefore not scalable to large-scale environments or to the mass production of groundtruth data. In this work, we propose LSFB, a novel low-cost and scalable framework for building localization benchmarks with groundtruth poses in large-scale environments. The key is to build an accurate HD map of the environment. For each visual-inertial sequence captured in the environment, the groundtruth poses are obtained by a joint optimization over both the HD-map and visual-inertial constraints. The experiments demonstrate that the obtained groundtruth poses are accurate enough for AR applications. We use the proposed method to collect a dataset of both mobile phones and AR glasses exploring large-scale environments, and will release the dataset as a new localization benchmark for AR.
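The joint optimization mentioned in the abstract can be illustrated with a toy sketch: trajectory poses are refined by minimizing both absolute residuals from map-based registration (standing in for the HD-map constraints) and relative residuals between consecutive frames (standing in for the visual-inertial constraints). This is a minimal 2D illustration under assumed weights and synthetic data, not the paper's actual formulation; all names (`map_obs`, `rel_obs`, `w_map`, `w_rel`) are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, map_obs, rel_obs, w_map=1.0, w_rel=10.0):
    """Stacked residuals for a toy joint optimization.

    map_obs: list of (index, observed 2D position)   -- absolute ("HD map") terms
    rel_obs: list of (index, observed 2D delta)      -- relative ("visual-inertial") terms
    """
    poses = x.reshape(-1, 2)                     # N poses, each (px, py)
    res = []
    for i, p in map_obs:                         # absolute: pose_i should match map observation
        res.append(w_map * (poses[i] - p))
    for i, d in rel_obs:                         # relative: pose_{i+1} - pose_i should match delta
        res.append(w_rel * ((poses[i + 1] - poses[i]) - d))
    return np.concatenate(res)

# Synthetic ground truth: straight-line motion along x
N = 5
gt = np.stack([np.arange(N, dtype=float), np.zeros(N)], axis=1)
rng = np.random.default_rng(0)
# Sparse, noisier absolute observations; dense, accurate relative observations
map_obs = [(i, gt[i] + rng.normal(0, 0.05, 2)) for i in range(0, N, 2)]
rel_obs = [(i, gt[i + 1] - gt[i] + rng.normal(0, 0.01, 2)) for i in range(N - 1)]

sol = least_squares(residuals, np.zeros(2 * N), args=(map_obs, rel_obs))
est = sol.x.reshape(-1, 2)
print(np.abs(est - gt).max())                    # small error despite the poor initialization
```

The design point this mirrors is that relative (odometry-style) constraints alone drift, while sparse absolute constraints alone leave gaps; combining both pins the whole trajectory.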