Automatic and adaptable registration of live RGBD video streams

Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor
DOI: 10.1145/2822013.2822027
Published in: Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, October 2015
Citations: 6

Abstract

We introduce DeReEs-4V, an algorithm that receives two separate RGBD video streams and automatically produces a unified scene through RGBD registration in a few seconds. The motivation behind the solution presented here is to allow game players to place the depth-sensing cameras at arbitrary locations to capture any scene where there is some partial overlap between the parts of the scene captured by the sensors. A typical way to combine partially overlapping views from multiple cameras is through visual calibration using external markers within the field of view of both cameras. Calibration can be time consuming and may require fine tuning, interrupting gameplay. If the cameras are even slightly moved or bumped into, the calibration process typically needs to be repeated from scratch. In this article we demonstrate how RGBD registration can be used to automatically find a 3D viewing transformation to match the view of one camera with respect to the other without calibration while the system is running. To validate this approach, a comparison of our method against standard checkerboard target calibration is provided, with a thorough examination of the system performance under different scenarios. The system presented supports any application that might benefit from a wider operational field-of-view video capture. Our results show that the system is robust to camera movements while simultaneously capturing and registering live point clouds from two depth-sensing cameras.
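The abstract describes finding a rigid 3D transformation that aligns one camera's view with the other's. The paper's own DeReEs-4V pipeline is not reproduced here; as a minimal sketch of the core step shared by most RGBD registration methods, the snippet below estimates a rigid transform (rotation R and translation t) from already-matched 3D point pairs using the SVD-based Kabsch method, assuming correspondences between the two point clouds are given (in practice they would come from feature matching between the RGBD frames).

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that dst ≈ src @ R.T + t, given matched point pairs."""
    src_c = src.mean(axis=0)                 # centroids of each cloud
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: transform a random cloud, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = pts @ R_true.T + t_true
R, t = estimate_rigid_transform(pts, moved)
assert np.allclose(R, R_true, atol=1e-8)
assert np.allclose(t, t_true, atol=1e-8)
```

Once R and t are known, one point cloud can be mapped into the other's coordinate frame on every frame, which is what lets the unified scene survive small camera movements without re-running a checkerboard calibration.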