Image Systems Simulation for 360° Camera Rigs

Trisha Lian, J. Farrell, B. Wandell
{"title":"360°相机平台的图像系统仿真","authors":"Trisha Lian, J. Farrell, B. Wandell","doi":"10.2352/ISSN.2470-1173.2018.05.PMII-353","DOIUrl":null,"url":null,"abstract":"Camera arrays are used to acquire the 360◦ surround video data presented on 3D immersive displays. The design of these arrays involves a large number of decisions ranging from the placement and orientation of the cameras to the choice of lenses and sensors. We implemented an open-source software environment (iset360) to support engineers designing and evaluating camera arrays for virtual and augmented reality applications. The software uses physically based ray tracing to simulate a 3D virtual spectral scene and traces these rays through multi-element spherical lenses to calculate the irradiance at the imaging sensor. The software then simulates imaging sensors to predict the captured images. The sensor data can be processed to produce the stereo and monoscopic 360◦ panoramas commonly used in virtual reality applications. By simulating the entire capture pipeline, we can visualize how changes in the system components influence the system performance. We demonstrate the use of the software by simulating a variety of different camera rigs, including the Facebook Surround360, the GoPro Odyssey, the GoPro Omni, and the Samsung Gear 360. Introduction Head mounted visual displays can provide a compelling and immersive experience of a three-dimensional scene. Because the experience can be very impactful, there is a great deal of interest in developing applications ranging from clinical medicine, behavioral change, entertainment, education and experience-sharing [1] [2]. In some applications, computer graphics is used to generate content, providing a realistic, but not real, experience (e.g., video games). In other applications, the content is acquired from a real event (e.g., sports, concerts, news, or family gathering) using camera arrays (rigs) and subsequent extensive image processing that capture and render the environment (Figure 1). The design of these rigs involves many different engineering decisions, including the selection of lenses, sensors, and camera positions. In addition to the rig, there are many choices of how to store and process the acquired content. For example, data from multiple cameras are often transformed into a stereo pair of 360◦ panoramas [3] by stitching together images captured by multiple cameras. Based on the user’s head position and orientation, data are extracted from the panorama and rendered on a head mounted display. There is no single quality-limiting element of this system, and moreover, interactions between the hardware and software design choices limit how well metrics of individual components predict overall system quality. To create a good experience, we must be able to assess the combination of hardware and software components that comprise the entire system. Building and testing a complete rig is costly and slow; hence, it can be useful to obtain guidance about design choices by using Figure 1. Overview of the hardware and software components that combine in an camera rig for an immersive head-mounted display application. (A) The simulation includes a 3D spectral scene, the camera rig definition, and the individual camera specifications. This simulation produces a set of image outputs. (B) The images are then processed by a series of software algorithms. 
In this case, we show a pipeline that produces an intermediate panorama representation and the viewport calculations that render an image dependent on the users head position. a simulation of the system. This paper describes software tools that simulate controlled 3D realistic scenes and image acquisition systems, in order to generate images produced by specific hardware choices. These images are the inputs to the stitching and rendering algorithms. The simulation enables engineers to explore the impact of different design choices on the entire imaging system, including realistic scenes, hardware components, and post-processing algorithms. Software Implementation The iset360 software, which models the image capture pipeline of 360 camera rigs, has portions in MATLAB and portions in C++. The simulation software is freely available in three repositories within the ISET GitHub project: https://github.com/ISET 1. Figure 2 and Figure 3 summarize the initial stages of the workflow. The first portion of the code creates realistic 3D scenes and calculates the expected sensor irradiance given a lens description. To do so, we start with a 3D, virtual scene that is constructed using 3D modeling software (e.g. Blender or Maya). The scene is converted into a format compatible with PBRT [4], which is implemented in C++. PBRT is a quantitative computer graphics tool that we use to calculate the irradiance at the sensor as light travels from the 3D scene, through the lens, and onto the sensor surface. We augmented the PBRT code to return multispectral images, model lens diffraction and simulate light fields [5]. To promote platform in1The three repositories are iset360, iset3d, and isetcam","PeriodicalId":309050,"journal":{"name":"Photography, Mobile, and Immersive Imaging","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Image Systems Simulation for 360° Camera Rigs\",\"authors\":\"Trisha Lian, J. Farrell, B. Wandell\",\"doi\":\"10.2352/ISSN.2470-1173.2018.05.PMII-353\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Camera arrays are used to acquire the 360◦ surround video data presented on 3D immersive displays. The design of these arrays involves a large number of decisions ranging from the placement and orientation of the cameras to the choice of lenses and sensors. We implemented an open-source software environment (iset360) to support engineers designing and evaluating camera arrays for virtual and augmented reality applications. The software uses physically based ray tracing to simulate a 3D virtual spectral scene and traces these rays through multi-element spherical lenses to calculate the irradiance at the imaging sensor. The software then simulates imaging sensors to predict the captured images. The sensor data can be processed to produce the stereo and monoscopic 360◦ panoramas commonly used in virtual reality applications. By simulating the entire capture pipeline, we can visualize how changes in the system components influence the system performance. We demonstrate the use of the software by simulating a variety of different camera rigs, including the Facebook Surround360, the GoPro Odyssey, the GoPro Omni, and the Samsung Gear 360. Introduction Head mounted visual displays can provide a compelling and immersive experience of a three-dimensional scene. 
Because the experience can be very impactful, there is a great deal of interest in developing applications ranging from clinical medicine, behavioral change, entertainment, education and experience-sharing [1] [2]. In some applications, computer graphics is used to generate content, providing a realistic, but not real, experience (e.g., video games). In other applications, the content is acquired from a real event (e.g., sports, concerts, news, or family gathering) using camera arrays (rigs) and subsequent extensive image processing that capture and render the environment (Figure 1). The design of these rigs involves many different engineering decisions, including the selection of lenses, sensors, and camera positions. In addition to the rig, there are many choices of how to store and process the acquired content. For example, data from multiple cameras are often transformed into a stereo pair of 360◦ panoramas [3] by stitching together images captured by multiple cameras. Based on the user’s head position and orientation, data are extracted from the panorama and rendered on a head mounted display. There is no single quality-limiting element of this system, and moreover, interactions between the hardware and software design choices limit how well metrics of individual components predict overall system quality. To create a good experience, we must be able to assess the combination of hardware and software components that comprise the entire system. Building and testing a complete rig is costly and slow; hence, it can be useful to obtain guidance about design choices by using Figure 1. Overview of the hardware and software components that combine in an camera rig for an immersive head-mounted display application. (A) The simulation includes a 3D spectral scene, the camera rig definition, and the individual camera specifications. This simulation produces a set of image outputs. (B) The images are then processed by a series of software algorithms. In this case, we show a pipeline that produces an intermediate panorama representation and the viewport calculations that render an image dependent on the users head position. a simulation of the system. This paper describes software tools that simulate controlled 3D realistic scenes and image acquisition systems, in order to generate images produced by specific hardware choices. These images are the inputs to the stitching and rendering algorithms. The simulation enables engineers to explore the impact of different design choices on the entire imaging system, including realistic scenes, hardware components, and post-processing algorithms. Software Implementation The iset360 software, which models the image capture pipeline of 360 camera rigs, has portions in MATLAB and portions in C++. The simulation software is freely available in three repositories within the ISET GitHub project: https://github.com/ISET 1. Figure 2 and Figure 3 summarize the initial stages of the workflow. The first portion of the code creates realistic 3D scenes and calculates the expected sensor irradiance given a lens description. To do so, we start with a 3D, virtual scene that is constructed using 3D modeling software (e.g. Blender or Maya). The scene is converted into a format compatible with PBRT [4], which is implemented in C++. PBRT is a quantitative computer graphics tool that we use to calculate the irradiance at the sensor as light travels from the 3D scene, through the lens, and onto the sensor surface. 
We augmented the PBRT code to return multispectral images, model lens diffraction and simulate light fields [5]. To promote platform in1The three repositories are iset360, iset3d, and isetcam\",\"PeriodicalId\":309050,\"journal\":{\"name\":\"Photography, Mobile, and Immersive Imaging\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-01-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photography, Mobile, and Immersive Imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2352/ISSN.2470-1173.2018.05.PMII-353\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photography, Mobile, and Immersive Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2352/ISSN.2470-1173.2018.05.PMII-353","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Camera arrays are used to acquire the 360° surround video data presented on 3D immersive displays. The design of these arrays involves a large number of decisions, ranging from the placement and orientation of the cameras to the choice of lenses and sensors. We implemented an open-source software environment (iset360) to support engineers designing and evaluating camera arrays for virtual and augmented reality applications. The software uses physically based ray tracing to simulate a 3D virtual spectral scene and traces these rays through multi-element spherical lenses to calculate the irradiance at the imaging sensor. The software then simulates imaging sensors to predict the captured images. The sensor data can be processed to produce the stereo and monoscopic 360° panoramas commonly used in virtual reality applications. By simulating the entire capture pipeline, we can visualize how changes in the system components influence overall system performance. We demonstrate the use of the software by simulating a variety of camera rigs, including the Facebook Surround360, the GoPro Odyssey, the GoPro Omni, and the Samsung Gear 360.
Introduction

Head mounted visual displays can provide a compelling and immersive experience of a three-dimensional scene. Because the experience can be very impactful, there is a great deal of interest in developing applications ranging from clinical medicine and behavioral change to entertainment, education, and experience-sharing [1][2]. In some applications, computer graphics is used to generate content, providing a realistic, but not real, experience (e.g., video games). In other applications, the content is acquired from a real event (e.g., sports, concerts, news, or a family gathering) using camera arrays (rigs), followed by extensive image processing that captures and renders the environment (Figure 1). The design of these rigs involves many different engineering decisions, including the selection of lenses, sensors, and camera positions.

In addition to the rig, there are many choices of how to store and process the acquired content. For example, data from multiple cameras are often transformed into a stereo pair of 360° panoramas [3] by stitching together the images captured by the individual cameras. Based on the user's head position and orientation, data are extracted from the panorama and rendered on a head-mounted display.

There is no single quality-limiting element in this system; moreover, interactions between the hardware and software design choices limit how well metrics of individual components predict overall system quality. To create a good experience, we must be able to assess the combination of hardware and software components that comprise the entire system. Building and testing a complete rig is costly and slow; hence, it can be useful to obtain guidance about design choices by using a simulation of the system.

Figure 1. Overview of the hardware and software components that combine in a camera rig for an immersive head-mounted display application. (A) The simulation includes a 3D spectral scene, the camera rig definition, and the individual camera specifications. The simulation produces a set of image outputs. (B) The images are then processed by a series of software algorithms. In this case, we show a pipeline that produces an intermediate panorama representation and the viewport calculations that render an image dependent on the user's head position.
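The viewport calculation mentioned in Figure 1B can be summarized in a few lines: each display pixel corresponds to a ray of a pinhole camera oriented by the user's head pose, and that ray indexes a longitude and latitude in the equirectangular panorama. The following MATLAB sketch is illustrative only; it is not part of the iset360 distribution, and the panorama file, head pose, output size, and field of view are assumed example values.

% Minimal sketch: extract a perspective viewport from an equirectangular
% panorama given a head yaw/pitch. All parameter values are assumed examples.
pano = im2double(imread('pano.jpg'));    % equirectangular panorama, W = 2H
[H, W, ~] = size(pano);

yaw   = deg2rad(30);                     % head orientation (example values)
pitch = deg2rad(10);
fov   = deg2rad(90);                     % horizontal field of view of viewport
outW  = 960;  outH = 540;                % viewport resolution

f = (outW/2) / tan(fov/2);               % pinhole focal length in pixels

% Pixel grid of the output viewport, centered on the optical axis
[x, y] = meshgrid((1:outW) - outW/2, (1:outH) - outH/2);
z = f * ones(size(x));

% Rotate the viewing rays: pitch about the x-axis, then yaw about the y-axis
Rx = [1 0 0; 0 cos(pitch) -sin(pitch); 0 sin(pitch) cos(pitch)];
Ry = [cos(yaw) 0 sin(yaw); 0 1 0; -sin(yaw) 0 cos(yaw)];
d  = (Ry * Rx * [x(:) y(:) z(:)]')';

% Convert ray directions to longitude/latitude, then to panorama coordinates
lon = atan2(d(:,1), d(:,3));             % [-pi, pi]
lat = asin(d(:,2) ./ vecnorm(d, 2, 2));  % [-pi/2, pi/2]
u = (lon + pi)   / (2*pi) * (W-1) + 1;   % panorama column
v = (lat + pi/2) / pi     * (H-1) + 1;   % panorama row

% Sample the panorama to form the viewport image
viewport = zeros(outH, outW, 3);
for c = 1:3
    viewport(:,:,c) = reshape(interp2(pano(:,:,c), u, v, 'linear', 0), outH, outW);
end
imshow(viewport);

Bilinear sampling via interp2 is sufficient for illustration; production viewers typically use higher-quality filtering and handle the longitudinal wrap-around at ±180°, which this sketch ignores.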
This paper describes software tools that simulate controlled, realistic 3D scenes and image acquisition systems in order to generate the images produced by specific hardware choices. These images are the inputs to the stitching and rendering algorithms. The simulation enables engineers to explore the impact of different design choices on the entire imaging system, including realistic scenes, hardware components, and post-processing algorithms.

Software Implementation

The iset360 software, which models the image capture pipeline of 360° camera rigs, is implemented partly in MATLAB and partly in C++. The simulation software is freely available in three repositories (iset360, iset3d, and isetcam) within the ISET GitHub project: https://github.com/ISET. Figure 2 and Figure 3 summarize the initial stages of the workflow.

The first portion of the code creates realistic 3D scenes and calculates the expected sensor irradiance given a lens description. To do so, we start with a 3D virtual scene that is constructed using 3D modeling software (e.g., Blender or Maya). The scene is converted into a format compatible with PBRT [4], which is written in C++. PBRT is a quantitative computer graphics tool that we use to calculate the irradiance at the sensor as light travels from the 3D scene, through the lens, and onto the sensor surface. We augmented the PBRT code to return multispectral images, model lens diffraction, and simulate light fields [5]. To promote platform in…
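The workflow described above can be sketched end to end with ISETCam/iset3d-style calls. The function names below (piRead, piRender, sensorCompute, ipCompute) follow the public ISET repositories, but the exact signatures, the scene file, and the lens file are assumptions for illustration; consult the iset360 documentation for the working interface.

% Minimal sketch of the simulated capture pipeline, assuming ISETCam/iset3d-style
% APIs. Scene and lens file names are hypothetical.
ieInit;                                        % initialize the ISET session

thisR = piRead('livingRoom.pbrt');             % load a PBRT scene description
% Attach a multi-element lens description to the camera (assumed lens file)
thisR.camera = piCameraCreate('omni', 'lensFile', 'fisheye.dat');
piWrite(thisR);                                % write the PBRT files for rendering
oi = piRender(thisR);                          % ray-trace: spectral irradiance at the sensor plane

sensor = sensorCreate('bayer-rggb');           % color-filter-array sensor model
sensor = sensorSet(sensor, 'exp time', 1/60);  % exposure duration (assumed)
sensor = sensorCompute(sensor, oi);            % photons -> electrons -> volts, with noise

ip  = ipCreate;                                % simple image processing pipeline
ip  = ipCompute(ip, sensor);                   % demosaic, color transform, display rendering
rgb = ipGet(ip, 'result');                     % final RGB image for this camera

Repeating this simulation once per camera in the rig, with each camera's position and orientation set in the scene recipe, yields the set of images that feed the stitching and panorama algorithms described earlier.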