Learning to Fuse: A Deep Learning Approach to Visual-Inertial Camera Pose Estimation

J. Rambach, Aditya Tewari, A. Pagani, D. Stricker
2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), September 2016
DOI: 10.1109/ISMAR.2016.19
Citations: 49

Abstract

Camera pose estimation is the cornerstone of Augmented Reality applications. Pose tracking based exclusively on camera images has been shown to be sensitive to motion blur, occlusions, and illumination changes. Thus, much work has been conducted in recent years on visual-inertial pose tracking, using acceleration and angular velocity measurements from inertial sensors to improve the visual tracking. Most proposed systems approach the sensor fusion problem with statistical filtering techniques, which require complex system modelling and calibration in order to perform adequately. In this work we present a novel approach to sensor fusion, using a deep learning method to learn the relation between camera poses and inertial sensor measurements. A long short-term memory (LSTM) model is trained to provide an estimate of the current pose based on previous poses and inertial measurements. This estimate is then appropriately combined with the output of a visual tracking system using a linear Kalman filter to provide a robust final pose estimate. Our experimental results confirm the applicability of the proposed sensor fusion system and the tracking performance improvement it provides.
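The fusion step described in the abstract — combining the LSTM's pose prediction with the visual tracker's output through a linear Kalman filter — can be illustrated with a minimal sketch. This is not the authors' implementation: the position-only state, identity measurement model, and the covariances `P_lstm` and `R_vis` are assumptions made purely for illustration.

```python
import numpy as np

def kalman_fuse(pose_lstm, P_lstm, pose_vis, R_vis):
    """Fuse an LSTM-predicted pose with a visual-tracker pose via a
    linear Kalman update (illustrative sketch; covariances assumed).

    pose_lstm : prior state estimate from the LSTM (here a 3D position)
    P_lstm    : covariance of the LSTM prediction
    pose_vis  : pose measurement from the visual tracking system
    R_vis     : covariance of the visual measurement
    """
    # Prediction: treat the LSTM output as the prior state estimate.
    x_pred, P_pred = pose_lstm, P_lstm

    # Update: the visual pose is the measurement, with H = identity.
    K = P_pred @ np.linalg.inv(P_pred + R_vis)      # Kalman gain
    x_fused = x_pred + K @ (pose_vis - x_pred)      # corrected state
    P_fused = (np.eye(len(x_pred)) - K) @ P_pred    # corrected covariance
    return x_fused, P_fused

# With equal trust in both sources, the fused pose is the midpoint;
# inflating R_vis (e.g. during motion blur) pulls it toward the LSTM.
x, P = kalman_fuse(np.array([1.0, 0.0, 0.0]), np.eye(3) * 0.5,
                   np.array([1.2, 0.0, 0.0]), np.eye(3) * 0.5)
```

The key property this sketch shows is the adaptive weighting: when the visual tracker is unreliable, a larger `R_vis` shrinks the Kalman gain and the fused estimate stays close to the inertial/LSTM prediction, which is the robustness mechanism the abstract claims.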