Demo: A distributed virtual vision simulator

Wiktor Starzyk, Adam Domurad, F. Qureshi
{"title":"Demo: A distributed virtual vision simulator","authors":"Wiktor Starzyk, Adam Domurad, F. Qureshi","doi":"10.1109/ICDSC.2011.6042945","DOIUrl":null,"url":null,"abstract":"Realistic virtual worlds can serve as laboratories for carrying out camera networks research. This unorthodox “Virtual Vision” paradigm advocates developing visually and behaviorally realistic 3D environments to serve the needs of computer vision. Our work on high-level coordination and control in camera networks is a testament to the suitability of virtual vision paradigm for camera networks research. The prerequisite for carrying out virtual vision research is a virtual vision simulator capable of generating synthetic imagery from simulated real-life scenes. We present a distributed, customizable virtual vision simulator capable of simulating pedestrian traffic in a variety of 3D environments. Virtual cameras deployed in this synthetic environment generate synthetic imagery — boasting realistic lighting effects, shadows, etc. — using the state-of-the-art computer graphics techniques. The synthetic imagery is fed into a “real-world” vision pipeline that performs visual analysis — e.g., blob detection and tracking, facial detection, etc. — and returns the results of this analysis to our simulated cameras for subsequent higher level processing. It is important to bear in mind that our vision pipeline is designed to handle real world imagery without any modifications. Consequently, it closely mimics the performance of a vision pipeline that one might deploy on physical cameras. Our virtual vision simulator is realized as a collection of modules that communicate with each other over the network. Consequently, we can deploy our simulator over a network of computers, allowing us to simulate much larger networks and much more complex scenes then is otherwise possible.","PeriodicalId":385052,"journal":{"name":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","volume":"106 10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 Fifth ACM/IEEE International Conference on Distributed Smart Cameras","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSC.2011.6042945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Realistic virtual worlds can serve as laboratories for carrying out camera networks research. This unorthodox “Virtual Vision” paradigm advocates developing visually and behaviorally realistic 3D environments to serve the needs of computer vision. Our work on high-level coordination and control in camera networks is a testament to the suitability of the virtual vision paradigm for camera networks research. The prerequisite for carrying out virtual vision research is a virtual vision simulator capable of generating synthetic imagery from simulated real-life scenes. We present a distributed, customizable virtual vision simulator capable of simulating pedestrian traffic in a variety of 3D environments. Virtual cameras deployed in this synthetic environment generate synthetic imagery — boasting realistic lighting effects, shadows, etc. — using state-of-the-art computer graphics techniques. The synthetic imagery is fed into a “real-world” vision pipeline that performs visual analysis — e.g., blob detection and tracking, facial detection, etc. — and returns the results of this analysis to our simulated cameras for subsequent higher-level processing. It is important to bear in mind that our vision pipeline is designed to handle real-world imagery without any modifications. Consequently, it closely mimics the performance of a vision pipeline that one might deploy on physical cameras. Our virtual vision simulator is realized as a collection of modules that communicate with each other over the network. Consequently, we can deploy our simulator over a network of computers, allowing us to simulate much larger networks and much more complex scenes than is otherwise possible.
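To illustrate the kind of “real-world” vision pipeline the abstract describes, the sketch below runs background-subtraction blob detection on a stream of synthetic frames. This is a minimal example using OpenCV, not the authors' implementation; the input video name, the minimum-area threshold, and the returned per-frame bounding-box format are assumptions made for illustration.

```python
# Minimal sketch of a "real-world" vision pipeline fed with synthetic imagery.
# Not the authors' code: an OpenCV-based stand-in for the blob-detection stage.
# The video path and result format are hypothetical.
import cv2

def detect_blobs(video_path="synthetic_frames.avi", min_area=200):
    """Run background subtraction on synthetic frames and return, for each
    frame, a list of (x, y, w, h) bounding boxes around detected blobs."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    results = []  # one list of bounding boxes per frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        # Suppress shadow pixels (marked as 127 by MOG2) and low-level noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        results.append(boxes)

    cap.release()
    # In the simulator, results like these would be returned to the simulated
    # cameras for subsequent higher-level processing.
    return results

if __name__ == "__main__":
    per_frame_boxes = detect_blobs()
    print(f"processed {len(per_frame_boxes)} frames")
```

Because nothing in such a pipeline depends on where the frames come from, the same code could be pointed at footage from physical cameras, which is the property the abstract emphasizes when comparing the simulator against real deployments.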