2018 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON): Latest Publications

DEPTH ESTIMATION IN LIGHT FIELD CAMERA ARRAYS BASED ON MULTI-STEREO MATCHING AND BELIEF PROPAGATION
Ségolène Rogge, A. Munteanu
DOI: 10.1109/3DTV.2018.8478503 | Published: 2018-06-03
Abstract: Despite the rich variety of depth estimation methods in the literature, computing accurate depth in multi-view camera systems remains a difficult computer vision problem. This paper proposes a novel depth estimation method for light field camera arrays. The work goes beyond existing depth estimation methods for light field cameras, being the first to employ an array of such cameras. The proposed method uses a multi-window, multi-scale stereo matching algorithm combined with global energy minimization based on belief propagation. The stereo-pair results are merged using k-means clustering. The experiments demonstrate systematically improved depth estimation performance compared to the use of single light field cameras. Additionally, the quality of the depth estimates is quasi-constant at any location between the cameras, which holds great promise for the development of free-navigation applications in the near future.
Citations: 3

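The abstract above describes fusing per-stereo-pair depth estimates with k-means clustering but gives no implementation details. As a rough, hypothetical illustration of that merging step (not the authors' code; the per-pixel formulation and all names are assumptions), a minimal numpy sketch:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=10):
    """Tiny 1-D k-means: returns (centers, labels)."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

def merge_depth_maps(depth_stack, k=2):
    """Fuse per-stereo-pair depth maps (N, H, W) into one (H, W) map by
    clustering the N candidate depths at each pixel and keeping the mean of
    the most populated cluster; outlier pairs fall into minority clusters.
    Illustrative only -- a per-pixel Python loop is far from optimized."""
    n, h, w = depth_stack.shape
    fused = np.empty((h, w), dtype=depth_stack.dtype)
    for y in range(h):
        for x in range(w):
            candidates = depth_stack[:, y, x]
            centers, labels = kmeans_1d(candidates, k=min(k, n))
            best = np.bincount(labels, minlength=k).argmax()
            fused[y, x] = candidates[labels == best].mean()
    return fused
```
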
LOCAL METHOD OF COLOR-DIFFERENCE CORRECTION BETWEEN STEREOSCOPIC-VIDEO VIEWS
S. Lavrushkin, Vitaliy Lyudvichenko, D. Vatolin
DOI: 10.1109/3DTV.2018.8478453 | Published: 2018-06-01
Abstract: Many factors can cause color distortions between stereoscopic views during 3D-video shooting. Numerous viewers experience discomfort and headaches when watching stereoscopic videos that contain such distortions. In addition, 3D videos with color differences are hard to process because many algorithms assume brightness constancy. We propose an automatic method for correcting color distortions between stereoscopic views and compare it with analogous methods. The comparison shows that our method combines high color-correction accuracy with relatively low computational complexity.
Citations: 2

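The paper's method is local and tailored to stereo video; for orientation, here is the classic global baseline such methods improve upon: matching each channel's mean and standard deviation between the two views. A minimal sketch, not the authors' algorithm:

```python
import numpy as np

def match_color_statistics(target, reference):
    """Global color-correction baseline: shift and scale each channel of
    `target` (e.g. the right view) so its mean and standard deviation match
    `reference` (the left view). Inputs are float arrays in [0, 1], (H, W, 3)."""
    corrected = np.empty_like(target)
    for c in range(3):
        t, r = target[..., c], reference[..., c]
        scale = r.std() / (t.std() + 1e-8)  # avoid division by zero
        corrected[..., c] = (t - t.mean()) * scale + r.mean()
    return np.clip(corrected, 0.0, 1.0)
```

A local method would compute such statistics per region or per correspondence (guided by disparity) rather than over the whole frame, which is what distinguishes the paper's approach from this baseline.
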
LATEST RESEARCH AT THE ADVANCED DISPLAYS LABORATORY AT NTU
P. Surman, X. Zhang, Weitao Song, Xinxing Xia, Shizheng Wang, Yuanjin Zheng
DOI: 10.1109/3DTV.2018.8478440 | Published: 2018-06-01
Abstract: There are many basic ways of providing a glasses-free 3D display; the three methods considered most likely to succeed commercially were chosen for our current research: multi-layer light field, head-tracked, and super multiview displays. Our multi-layer light field display enables a far smaller form factor than other types, and faster algorithms along with horizontal-parallax-only operation will considerably speed up computation. A spin-off of this technology is a near-eye display that provides focus cues to maximize user comfort. Head-tracked displays use liquid crystal display panels illuminated by a directional backlight to produce multiple sets of exit-pupil pairs that follow the user's eyes under the control of a head-position tracker. Our super multiview (SMV) display system uses high frame-rate projectors for spatio-temporal multiplexing, giving dense viewing zones with no accommodation/convergence (A/C) conflict. Bandwidth reduction is achieved by discarding redundant information at capture. The status of the latest prototypes and their performance is described, and we conclude by indicating the future directions of our research.
Citations: 0

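As a toy illustration of the head-tracked principle described above, the sketch below maps a tracked eye position to the nearest directional-backlight zone. The geometry, zone count, and function names are all assumptions for illustration, not details of the NTU prototypes:

```python
import math

def select_backlight_zone(eye_x_mm, eye_z_mm, num_zones=16, fov_deg=60.0):
    """Pick the directional-backlight zone whose exit-pupil direction is
    closest to the tracked eye. The display sits at the origin facing +z;
    zones tile the horizontal field of view uniformly (an assumption)."""
    angle = math.degrees(math.atan2(eye_x_mm, eye_z_mm))  # eye bearing
    half = fov_deg / 2.0
    angle = max(-half, min(half, angle))                  # clamp into FOV
    return round((angle + half) / fov_deg * (num_zones - 1))

# Example: an eye 120 mm left of center at 600 mm viewing distance.
print(select_backlight_zone(-120.0, 600.0))
```
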
3D OBJECTIVE QUALITY ASSESSMENT OF LIGHT FIELD VIDEO FRAMES
R. R. Tamboli, P. A. Kara, A. Cserkaszky, A. Barsi, M. Martini, Balasubramanyam Appina, Sumohana S. Channappayya, S. Jana
DOI: 10.1109/3DTV.2018.8478557 | Published: 2018-06-01
Abstract: With the rapid advances in light field displays and cameras, research in light field content creation, visualization, coding, and quality assessment is now beyond a state of emergence; it has already emerged and begun attracting a significant part of the scientific community. The capability of light field displays to offer a glasses-free 3D experience simultaneously to multiple users has opened new avenues in subjective and objective quality assessment of light field image content, and video is also becoming a research target of such quality evaluation methods. Yet while static light field content has received relatively more attention, research on light field video content remains largely unexplored. In this paper, we present results of the objective quality assessment of key frames extracted from light field video content. To this end, we use our own full-reference 3D objective quality metric.
Citations: 4

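The paper applies the authors' own full-reference 3D metric to extracted key frames. As a generic stand-in showing only the shape of that workflow (plain PSNR substituted for their metric; all names are hypothetical):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) -
                   distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def score_key_frames(ref_frames, test_frames):
    """Average a full-reference metric over key frames extracted from a
    light field video; here PSNR stands in for the paper's 3D metric."""
    return np.mean([psnr(r, t) for r, t in zip(ref_frames, test_frames)])
```
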
SINGLE-SHOT DENSE RECONSTRUCTION WITH EPIC-FLOW
Qiao Chen, Charalambos (Charis) Poullis
DOI: 10.1109/3DTV.2018.8478620 | Published: 2018-06-01
Abstract: In this paper we present a novel method for generating dense reconstructions by applying only structure-from-motion (SfM) to large-scale datasets, without the need for multi-view stereo as a post-processing step. A state-of-the-art optical flow technique is used to generate dense matches. The matches are encoded so that verification for correctness becomes possible, and are stored in an on-disk database. This out-of-core approach transfers the requirement for large memory space to disk, thereby allowing the processing of even larger-scale datasets than before. We compare our approach with the state of the art and present results that verify our claims.
Citations: 0

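One plausible reading of "encoded so that verification for correctness becomes possible" is a forward-backward flow consistency check; the sketch below combines that with an on-disk SQLite store to illustrate the out-of-core idea. An assumption-laden illustration, not the paper's pipeline:

```python
import sqlite3
import numpy as np

def store_matches(db_path, pair_id, fwd_flow, bwd_flow, max_err=1.0):
    """Store dense matches on disk rather than in RAM. A match
    (x, y) -> (x+u, y+v) from `fwd_flow` (H, W, 2) is kept only if the
    backward flow maps it back to within `max_err` pixels."""
    h, w = fwd_flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = fwd_flow[..., 0], fwd_flow[..., 1]
    x2 = np.clip((xs + u).round().astype(int), 0, w - 1)
    y2 = np.clip((ys + v).round().astype(int), 0, h - 1)
    # Round-trip error of the forward match under the backward flow.
    err = np.hypot(u + bwd_flow[y2, x2, 0], v + bwd_flow[y2, x2, 1])
    keep = err < max_err
    rows = [(pair_id, int(x), int(y), float(x + du), float(y + dv))
            for x, y, du, dv in zip(xs[keep], ys[keep], u[keep], v[keep])]
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS matches "
                   "(pair TEXT, x1 INT, y1 INT, x2 REAL, y2 REAL)")
        db.executemany("INSERT INTO matches VALUES (?, ?, ?, ?, ?)", rows)
```

In a real pipeline the inserts would be batched per tile so the row list never holds a full image's matches in memory at once.
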
SIMULATION OF PLENOPTIC CAMERAS
Tim Michels, Arne Petersen, L. Palmieri, R. Koch
DOI: 10.1109/3DTV.2018.8478432 | Published: 2018-06-01
Abstract: Plenoptic cameras capture spatial as well as angular color information, which can be used for various applications, among them image refocusing and depth calculation. However, these cameras are expensive, and research in this area currently lacks data for ground-truth comparisons. In this work we describe a flexible, easy-to-use Blender model for the different plenoptic camera types which, on the one hand, provides ground-truth data for research and, on the other, allows an inexpensive assessment of a camera's usefulness for the desired applications. Furthermore, we show that the rendering results exhibit the same image degradation effects as real cameras, and we make our simulation publicly available.
Citations: 5

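The authors' Blender model is publicly available; purely to give a flavor of what such a model involves, here is a minimal Blender (2.8x+) Python sketch that places a lenslet grid between the main lens and the sensor plane. All dimensions, counts, and the flattened-sphere lenslet approximation are illustrative assumptions, not the published model:

```python
# Run inside Blender's Python environment (2.8x+ API assumed).
import bpy

LENSLETS_X, LENSLETS_Y = 20, 20   # grid resolution (assumption)
PITCH = 0.002                     # lenslet pitch in scene units (assumption)
ARRAY_Z = -0.01                   # plane between main lens and sensor

for i in range(LENSLETS_X):
    for j in range(LENSLETS_Y):
        x = (i - LENSLETS_X / 2) * PITCH
        y = (j - LENSLETS_Y / 2) * PITCH
        bpy.ops.mesh.primitive_uv_sphere_add(radius=PITCH / 2,
                                             location=(x, y, ARRAY_Z))
        lenslet = bpy.context.active_object
        lenslet.scale = (1.0, 1.0, 0.2)  # flatten into a lens-like shape
```

A usable simulation would additionally assign a refractive glass material to each lenslet and render with a physically based engine such as Cycles.
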
MATCHING LIGHT FIELD DATASETS FROM PLENOPTIC CAMERAS 1.0 AND 2.0
Waqas Ahmad, L. Palmieri, R. Koch, Mårten Sjöström
DOI: 10.1109/3DTV.2018.8478611 | Published: 2018-06-01
Abstract: Capturing the angular and spatial information of a scene with a single camera is made possible by an emerging technology referred to as the plenoptic camera. Angular and spatial information together enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of a scene; recently, with advances in optical technology, plenoptic cameras have been introduced for this purpose. In a plenoptic camera, a lenslet array placed between the main lens and the image sensor multiplexes the spatial and angular information onto a single image, also referred to as a plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor results in two different optical designs, referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides benchmark content for various research and development activities on plenoptic images.
Citations: 5

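For context on how plenoptic 1.0 data is typically consumed, the sketch below decodes an idealized lenslet image into sub-aperture views: the pixel at the same offset under every microlens belongs to the same view. Real Lytro Illum captures need calibration first (hexagonal grid, rotation); this assumes a perfectly rectified square grid:

```python
import numpy as np

def extract_subaperture_views(raw, ml_size):
    """Decode an ideal plenoptic 1.0 capture into sub-aperture views.
    `raw` is (H, W) with H, W divisible by `ml_size`. Returns an array of
    shape (ml_size, ml_size, H//ml_size, W//ml_size): views[u, v] is the
    image formed by the pixel at offset (u, v) under every microlens."""
    h, w = raw.shape
    views = raw.reshape(h // ml_size, ml_size, w // ml_size, ml_size)
    return views.transpose(1, 3, 0, 2)
```
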
AN ANALYSIS OF DEMOSAICING FOR PLENOPTIC CAPTURE BASED ON RAY OPTICS
Yongwei Li, R. Olsson, Mårten Sjöström
DOI: 10.1109/3DTV.2018.8478476 | Published: 2018-06-01
Abstract: The plenoptic camera is gaining more and more attention, as it captures the 4D light field of a scene in a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for the captured raw data, such as demosaicing, have been overlooked: most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras using ray-tracing techniques and ray phase-space analysis. The goal of this work is to establish guidelines and principles for demosaicing plenoptic captures that take the unique microlens array design into account. We show that the sampling of a plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.
Citations: 2

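The depth dependence the paper establishes can be previewed with elementary thin-lens geometry: the spot a scene point casts on the lenslet plane, and hence the set of sensor samples it touches, varies with depth. A simplified 1-D sketch under assumed dimensions, not the paper's ray phase-space analysis:

```python
def conjugate_distance(f_mm, z_mm):
    """Thin-lens equation 1/f = 1/z + 1/z': image-side conjugate of depth z."""
    return 1.0 / (1.0 / f_mm - 1.0 / z_mm)

def lenslets_covered(f_mm, aperture_mm, lenslet_plane_mm, pitch_mm, z_mm):
    """Number of lenslets spanned by the blur spot of a point at depth z."""
    z_img = conjugate_distance(f_mm, z_mm)
    # Similar triangles: spot diameter where rays cross the lenslet plane.
    spot = aperture_mm * abs(z_img - lenslet_plane_mm) / z_img
    return spot / pitch_mm

# The spot grows from ~2.5 to ~22 lenslet pitches as depth recedes,
# so which color samples cover a point depends on its depth.
for z in (500.0, 1000.0, 5000.0):
    print(z, lenslets_covered(f_mm=50.0, aperture_mm=25.0,
                              lenslet_plane_mm=55.0, pitch_mm=0.1, z_mm=z))
```
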
ANALYSIS OF ACCOMMODATION CUES IN HOLOGRAPHIC STEREOGRAMS
Jani Mäkinen, E. Sahin, A. Gotchev
DOI: 10.1109/3DTV.2018.8478586 | Published: 2018-06-01
Abstract: The simplicity of the holographic stereogram (HS) makes it an attractive option in comparison to the more complex coherent computer-generated hologram (CGH) methods. The cost of this simplicity is that the HS cannot accurately reconstruct deep scenes, due to the lack of correct accommodation cues. The exact nature of the accommodation cues present in HSs, however, has not been investigated. In this paper, we provide an analysis of the relation between the hologram sampling properties and the perceived accommodation response. The HS can be considered a generator of a discrete light field (LF) and can thus be examined by considering the ray-oriented nature of the hologram's diffracted light. We further support the analysis with a numerical reconstruction tool that simulates the viewing process of the human eye. The simulation results demonstrate that HSs can provide accommodation cues depending on the choice of hologram segmentation size. It is further demonstrated that the accommodation response can be enhanced at the expense of a loss in perceived spatial resolution.
Citations: 3

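The spatial-versus-accommodation trade-off reported above follows the standard space-bandwidth argument for an aperture of width Δ. As a hedged summary (a textbook diffraction relation, not an equation taken from the paper):

```latex
% Each hologram segment (hogel) of width $\Delta$ acts as an aperture,
% so its diffracted beam spreads by roughly $\theta_d$, while the image
% cannot be sampled finer than the hogel pitch $\delta x$:
\[
  \theta_d \;\approx\; \frac{\lambda}{\Delta},
  \qquad
  \delta x \;\approx\; \Delta,
  \qquad
  \delta x \,\theta_d \;\gtrsim\; \lambda .
\]
% Enlarging $\Delta$ sharpens ray directions (stronger accommodation
% cues) but coarsens spatial resolution -- the trade-off the paper
% demonstrates by varying the segmentation size.
```
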