Plenoptic Cameras

Bastian Goldlücke, Oliver Klehm, S. Wanner, E. Eisemann
DOI: 10.1201/b18154-7
Published in: Digital Representations of the Real World

Abstract

The light field, as defined by Gershun in 1936 [Gershun 36], describes the radiance traveling in every direction through every point in space. Mathematically, it can be described by a 5D function called the plenoptic function; in greater generality, it is sometimes given with two additional dimensions, time and wavelength. Outside a scene, however, in the absence of occluders, light intensity does not change as it travels along a ray. Thus, the light field of a scene can be parameterized over a surrounding surface, with a light intensity attributed to every ray passing through the surface in any direction. This yields the common definition of the light field as a 4D function. In contrast, a single pinhole view of the scene captures only the rays passing through its center of projection, corresponding to a single 2D cut through the light field. Fortunately, camera sensors have made tremendous progress and nowadays offer extremely high resolutions. For many visual-computing applications, however, spatial resolution is already more than sufficient, while robustness of the results is what really matters. Computational photography explores methods to use the extra resolution in different ways. In particular, it is possible to capture several views of a scene from slightly different directions on a single sensor, thus offering single-shot 4D light field capture. Technically, this capture can be realized by a so-called plenoptic camera, which uses an array of microlenses mounted in front of the sensor [Ng 06]. This type of camera offers interesting opportunities for the design of visual-computing algorithms, and it has been predicted that it will play an important role in the consumer market of the future [Levoy 06]. The dense sampling of the light field, with viewpoints lying closely together, may also offer new insights and opportunities for 3D reconstruction. Light fields have thus attracted considerable interest in the computer vision community. In particular, there are indications that small changes in viewpoint are important for visual understanding. For example, it has been shown that even minuscule changes at occlusion boundaries caused by viewpoint shifts provide a powerful perceptual cue for depth [Rucci 08].
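The idea that each microlens records a small block of angular samples, and that regrouping pixels by angular index yields a set of pinhole-like sub-aperture views (each a 2D cut through the 4D light field), can be illustrated with a minimal sketch. This is not the chapter's actual decoding pipeline; it assumes an idealized raw image whose microlenses each cover an exact n × n pixel block, with no lens distortion or sensor misalignment.

```python
import numpy as np

def decode_subaperture_views(raw, n):
    """Split an idealized raw plenoptic image into n x n sub-aperture views.

    raw : (H, W) array with H and W multiples of n; pixel (i, j) belongs
          to microlens (i // n, j // n) and angular sample (i % n, j % n).
    Returns an array of shape (n, n, H // n, W // n): one (H//n, W//n)
    image per viewing direction.
    """
    h, w = raw.shape
    assert h % n == 0 and w % n == 0
    # Split axes into (microlens row, angular row, microlens col, angular col).
    blocks = raw.reshape(h // n, n, w // n, n)
    # Reorder so the two angular indices come first: collecting all pixels
    # with the same angular index forms one sub-aperture (pinhole-like) view.
    return blocks.transpose(1, 3, 0, 2)

# Tiny synthetic example: 2 x 2 angular samples under each of 2 x 2 microlenses.
raw = np.arange(16).reshape(4, 4)
views = decode_subaperture_views(raw, 2)
assert views.shape == (2, 2, 2, 2)
```

In this toy example, `views[0, 0]` gathers the top-left pixel under every microlens, i.e. the view for one fixed direction; varying the two angular indices sweeps out the closely spaced viewpoints that make dense light-field sampling possible.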