Online camera pose estimation in partially known and dynamic scenes

G. Bleser, H. Wuest, D. Stricker
{"title":"Online camera pose estimation in partially known and dynamic scenes","authors":"G. Bleser, H. Wuest, D. Stricker","doi":"10.1109/ISMAR.2006.297795","DOIUrl":null,"url":null,"abstract":"One of the key requirements of augmented reality systems is a robust real-time camera pose estimation. In this paper we present a robust approach, which does neither depend on offline pre-processing steps nor on pre-knowledge of the entire target scene. The connection between the real and the virtual world is made by a given CAD model of one object in the scene. However, the model is only needed for initialization. A line model is created out of the object rendered from a given camera pose and registrated onto the image gradient for finding the initial pose. In the tracking phase, the camera is not restricted to the modeled part of the scene anymore. The scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid and are used to make the 2D feature tracking capable for large changes in scale. Occlusion is detected already on the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time using techniques of the Extended Kalman Filter framework. A quality manager handles the influence of a feature on the estimation of the camera pose. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty have been incorporated consequently into both processes. Finally, validation results on synthetic as well as on real video sequences are presented.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"663 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"131","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR.2006.297795","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 131

Abstract

One of the key requirements of augmented reality systems is robust real-time camera pose estimation. In this paper we present a robust approach that depends neither on offline pre-processing steps nor on prior knowledge of the entire target scene. The connection between the real and the virtual world is made by a given CAD model of one object in the scene. However, the model is only needed for initialization. A line model is created from the object rendered from a given camera pose and registered onto the image gradient to find the initial pose. In the tracking phase, the camera is no longer restricted to the modeled part of the scene. The scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness-invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid and are used to make the 2D feature tracking robust to large changes in scale. Occlusion is already detected at the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time using techniques of the Extended Kalman Filter framework. A quality manager handles the influence of each feature on the estimation of the camera pose. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty have been consistently incorporated into both processes. Finally, validation results on synthetic as well as real video sequences are presented.
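The abstract does not spell out the brightness-invariant matching; a common way to obtain invariance to affine brightness changes (gain and offset) is zero-mean normalized cross-correlation (ZNCC). The Python/NumPy sketch below scores a stored template against candidate windows in a local search region and flags a possible occlusion when even the best score is poor, mirroring the claim that occlusion is detected at the 2D tracking level. The function names, the search-window strategy, and the 0.7 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equally sized
    grayscale patches; invariant to affine brightness changes."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom < 1e-12 else float((a * b).sum() / denom)

def track_feature(template, image, prev_xy, radius=8, occ_thresh=0.7):
    """Search a (2*radius+1)^2 window around the feature's previous
    top-left corner `prev_xy` for the best ZNCC match. Returns the
    new position, the best score, and an occlusion flag raised when
    the best score is below an (assumed) threshold."""
    h, w = template.shape
    px, py = prev_xy
    best_score, best_xy = -1.0, prev_xy
    for y in range(py - radius, py + radius + 1):
        for x in range(px - radius, px + radius + 1):
            cand = image[y:y + h, x:x + w]
            if cand.shape != template.shape:
                continue  # window ran off the image border
            s = zncc(template, cand)
            if s > best_score:
                best_score, best_xy = s, (x, y)
    return best_xy, best_score, best_score < occ_thresh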
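To handle large scale changes, patches are extracted at several pyramid levels so that matching can fall back to a coarser or finer template as the feature's apparent size changes. A minimal sketch of the extraction step, assuming a simple 2x2 block-averaging pyramid; the paper's actual pyramid construction, patch size, and level count are not given in the abstract:

```python
import numpy as np

def extract_pyramid_patches(image, center, size=11, levels=3):
    """Cut one template patch around `center` (y, x) from each level
    of an image pyramid built by 2x2 block averaging. `size` and
    `levels` are illustrative defaults."""
    patches, img = [], image.astype(np.float64)
    cy, cx = center
    half = size // 2
    for _ in range(levels):
        patch = img[cy - half:cy - half + size, cx - half:cx - half + size]
        if patch.shape == (size, size):
            patches.append(patch.copy())
        # Downsample by 2; the feature coordinates move with the image.
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = (img[0:h:2, 0:w:2] + img[1:h:2, 0:w:2]
               + img[0:h:2, 1:w:2] + img[1:h:2, 1:w:2]) / 4.0
        cy, cx = cy // 2, cx // 2
    return patches
```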
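The rough 3D initialization by linear triangulation corresponds to the standard DLT construction: each view contributes two rows to a homogeneous system A X = 0, which is solved by SVD. A minimal two-view sketch, assuming the 3x4 projection matrices of both frames are given; as the abstract states, this is only a rough estimate that the filter later refines:

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    positions of the same feature in the two images."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],  # u1 * p3 - p1
        x1[1] * P1[2] - P1[1],  # v1 * p3 - p2
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value (least-squares solution of A X = 0).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```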
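Recursive refinement in an EKF linearizes the pinhole projection around the current point estimate. The sketch below performs one measurement update for a single feature's 3D position and covariance from one pixel observation, assuming the camera intrinsics and world-to-camera pose for the frame are known and the skew-free pinhole model applies. The paper's full filter estimates pose and structure jointly and propagates pose uncertainty as well, which this fragment omits.

```python
import numpy as np

def ekf_point_update(X, P, z, R, K, R_cw, t_cw):
    """One EKF measurement update for a single 3D feature.
    X (3,) state with covariance P (3x3); z (2,) observed pixel with
    noise covariance R (2x2); K the 3x3 intrinsics; R_cw, t_cw the
    (assumed known) world-to-camera rotation and translation."""
    # Predicted measurement: project the state into the camera.
    Xc = R_cw @ X + t_cw
    u = K @ Xc
    h = u[:2] / u[2]
    # Jacobian of the pixel w.r.t. the camera-frame point, chained
    # with R_cw (the derivative of Xc w.r.t. X).
    fx, fy = K[0, 0], K[1, 1]
    x, y, zc = Xc
    J_cam = np.array([[fx / zc, 0.0, -fx * x / zc**2],
                      [0.0, fy / zc, -fy * y / zc**2]])
    H = J_cam @ R_cw  # 2x3 measurement Jacobian
    # Standard EKF gain and update equations.
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)
    X_new = X + Kg @ (z - h)
    P_new = (np.eye(3) - Kg @ H) @ P
    return X_new, P_new
```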