Robust registration of virtual objects for industrial applications

B. Fergani, M. Batouche, B. Boufama
DOI: 10.1109/ICIT.2004.1490220
Published in: 2004 IEEE International Conference on Industrial Technology (IEEE ICIT '04)
Publication date: 2004-12-08
Citations: 0

Abstract

Vision-based registration techniques for augmented reality systems have been the subject of intensive research recently, owing to their potential to accurately align virtual objects with the real world. The registration process can be carried out in three steps: 1) positioning: place the virtual object at the desired position; 2) rendering: generate the 2D image of the positioned 3D virtual object; 3) compositing: merge the rendering of the virtual object with the image of the real environment to form an augmented image. Rendering the virtual object is not trivial, since it depends not only on the object's 3D location and orientation but also on the camera's location and orientation. When the camera undergoes a transformation, the rendering parameters change accordingly. Therefore, in a real-time augmented reality system, the camera pose must be tracked dynamically at frame rate. To simulate the real camera, we must know its intrinsic and extrinsic parameters. The intrinsic parameters describe the camera's optical, geometric, and digital characteristics; the extrinsic parameters describe its position and orientation. Both sets of parameters can be estimated through a camera calibration process (for example, Zonglei and Boufama's method). However, real-time calibration of the camera is not necessary for industrial applications (for example, Kutulakos and Vallino's method). In this paper we present our framework, based on Zonglei and Boufama's approach, and we describe Kutulakos and Vallino's approach (for which we do not yet have results).
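The rendering step described above can be illustrated with a standard pinhole camera model, in which a 3D point is first transformed into the camera frame by the extrinsic parameters [R | t] and then projected onto the image plane by the intrinsic matrix K. The following is a minimal sketch of that projection; the matrix values are illustrative placeholders, not parameters taken from the paper or from any particular calibration method.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point to 2D pixel coordinates (pinhole model)."""
    X_cam = R @ X_world + t   # world -> camera frame (extrinsic parameters)
    x = K @ X_cam             # camera frame -> image plane (intrinsic parameters)
    return x[:2] / x[2]       # perspective divide yields pixel coordinates

# Illustrative intrinsics: focal length 800 px, principal point at (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                 # camera axes aligned with world axes
t = np.array([0.0, 0.0, 5.0]) # object placed 5 units in front of the camera

uv = project_point(np.array([0.0, 0.0, 0.0]), K, R, t)
print(uv)  # the world origin projects to the principal point: [320. 240.]
```

In a full registration pipeline, K would come from an offline calibration and [R | t] would be updated at frame rate by the pose tracker, so that the virtual object stays aligned with the real scene as the camera moves.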