Multi-layer light field display characterisation
P. Surman, Shizheng Wang, K. S. Ong, Xiao Wei Sun, Junsong Yuan, Yuanjin Zheng
2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 4 July 2016. DOI: 10.1109/3DTV.2016.7548964 (https://doi.org/10.1109/3DTV.2016.7548964)
Abstract: Light field 3D displays, where a hologram-like image is produced using geometrical optical techniques rather than interference of light, require measurement procedures for certain parameters that differ from those used for conventional stereoscopic displays. This paper covers methods that are particularly applicable to multi-layer light field displays using two or more cascaded display layers, where the images are produced by computationally intensive algorithms. Because overall system performance depends on capture and computation as well as on the display hardware, we have developed methods that account for all components in the complete chain. In addition to describing these methods, we also cover other selected measurement techniques.
{"title":"GPU-based lossless volume data compression","authors":"S. Guthe, M. Goesele","doi":"10.1109/3DTV.2016.7548892","DOIUrl":"https://doi.org/10.1109/3DTV.2016.7548892","url":null,"abstract":"In rendering, textures are usually consuming more graphics memory than the geometry. This is especially true when rendering regular sampled volume data as the geometry is a single box. In addition, volume rendering suffers from the curse of dimensionality. Every time the resolution doubles, the number of projected pixels is multiplied by four but the amount of data is multiplied by eight. Data compression is thus mandatory even with the increasing amount of memory available on today's GPUs. Existing compression schemes are either lossy or do not allow on-the-fly random access to the volume data while rendering. Both of these properties are, however, important for high quality direct volume rendering. In this paper, we propose a lossless compression and caching strategy that allows random access and decompression on the GPU using a compressed volume object.","PeriodicalId":378956,"journal":{"name":"2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115669702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual view synthesis using RGB-D cameras","authors":"Chun-Liang Chien, Tzu-Chin Lee, H. Hang","doi":"10.1109/3DTV.2016.7548885","DOIUrl":"https://doi.org/10.1109/3DTV.2016.7548885","url":null,"abstract":"A view synthesis problem is to generate a virtual view based on the given one or multiple views and their associated depth maps. We adopt the depth image based rendering (DIBR) approach in this paper for synthesizing the new views. No explicit 3D modeling is involved. Another component of this study is the popular commodity RGB-D (color plus depth) cameras. The color and depth images captured by a pair of RGB-D cameras (Microsoft Kinect for Windows v2) are our inputs to synthesize intermediate virtual views between these two cameras. Several methods include depth to color warping, disocclusion filling, and color to color warping are adopted and designed to achieve this target. One of our major contributions is a new disocclusion detection algorithm proposed to improve the disocclusion filling result. Furthermore, an improved camera calibration method is proposed to make use of the additional depth information. Good quality synthesized views are shown at the end.","PeriodicalId":378956,"journal":{"name":"2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Universal blind image quality assessment for stereoscopic images","authors":"Sid Ahmed Fezza, A. Chetouani, M. Larabi","doi":"10.1109/3DTV.2016.7548888","DOIUrl":"https://doi.org/10.1109/3DTV.2016.7548888","url":null,"abstract":"Quality assessment of stereoscopic 3D images is a challenging field and represents a key factor in the success of 3D multimedia applications. Despite the important research effort in the last few years, there is no commonly accepted metric ensuring a reliable 3D quality evaluation. This statement becomes even worse when it comes to asymmetrically distorted stereoscopic content. In this paper, we propose a universal blind quality assessment metric for stereoscopic images relying on: 1) distortion type detection and 2) asymmetry nature determination. Based of the latter key information, the 3D image quality is appropriately estimated using a binocular combination strategy. Experimental results showed that the proposed metric reached a significant prediction consistency and accuracy when compared with state-of-the-art metrics.","PeriodicalId":378956,"journal":{"name":"2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130976082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}