{"title":"DYNAMIC FRACTURING OF 3D MODELS FOR REAL TIME COMPUTER GRAPHICS","authors":"Yousif Ali Hassan Najim, G. Triantafyllidis, G. Palamas","doi":"10.1109/3DTV.2018.8478546","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478546","url":null,"abstract":"This work proposes a method of fracturing one-sided 3D objects, in real time, using standard GPU shaders. Existing implementations include either pre-fracturing objects and replacing them at run-time, or precomputing the fracture patterns and using them to fracture the objects depending on user interaction. In this article we describe a novel method in which the fracturing calculations are handled by the GPU and only having the initial positions of the fracture fields handled by the CPU. To obtain higher resolutions of fractures, scalable tessellation is also implemented. As a result, this method allows for fast fracturing that could be utilized in real-time applications such as videogames.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128504472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VIEWING SIMULATION OF INTEGRAL IMAGING DISPLAY BASED ON WAVE OPTICS","authors":"U. Akpinar, E. Sahin, A. Gotchev","doi":"10.1109/3DTV.2018.8478568","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478568","url":null,"abstract":"We present an accurate model of integral imaging display based on wave optics. The model enables accurate characterization of the display through simulated perceived images by the human visual system. Thus, it is useful to investigate the capabilities of the display in terms of various quality factors such as depth of field and resolution, as well as delivering visual cues such as focus. Furthermore, due to the adopted wave optics formalism, simulation and analysis of more advanced techniques such as wavefront coding for increased depth of field are also possible.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127876886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ADAPTIVE COLOR CORRECTION IN VIRTUAL VIEW SYNTHESIS","authors":"A. Dziembowski, M. Domański","doi":"10.1109/3DTV.2018.8478439","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478439","url":null,"abstract":"In the paper an adaptive color correction method for virtual view synthesis is presented. It deals with the typical problem in free navigation systems – different illumination in views captured by different cameras acquiring the scene. The proposed technique adjusts the local color characteristics of objects visible in two real views. That approach allows to significantly reduce number and visibility of color artifacts in the virtual view. Proposed method was tested on 12 multiview test sequences. Obtained and presented in the paper results show, that proposed color correction provides increase of the virtual view quality measured by PSNR, SSIM and subjective evaluation.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125489980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LIFE: A FLEXIBLE TESTBED FOR LIGHT FIELD EVALUATION","authors":"Elijs Dima, Mårten Sjöström, R. Olsson, Martin Kjellqvist, Lukasz Litwic, Zhi Zhang, Lennart Rasmusson, Lars Flodén","doi":"10.1109/3DTV.2018.8478550","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478550","url":null,"abstract":"Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130302216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CHANNEL-MISMATCH DETECTION ALGORITHM FOR STEREOSCOPIC VIDEO USING CONVOLUTIONAL NEURAL NETWORK","authors":"S. Lavrushkin, D. Vatolin","doi":"10.1109/3DTV.2018.8478542","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478542","url":null,"abstract":"Channel mismatch (the result of swapping left and right views) is a 3D-video artifact that can cause major viewer discomfort. This work presents a novel high-accuracy method of channel-mismatch detection. In addition to the features described in our previous work, we introduce a new feature based on a convolutional neural network; it predicts channel-mismatch probability on the basis of the stereoscopic views and corresponding disparity maps. A logistic-regression model trained on the described features makes the final prediction. We tested this model on a set of 900 stereoscopic-video scenes, and it outperformed existing channel-mismatch detection methods that previously served in analyses of full-length stereoscopic movies.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121355686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DEPTH IMAGE BASED VIEW SYNTHESIS WITH MULTIPLE REFERENCE VIEWS FOR VIRTUAL REALITY","authors":"Sarah Fachada, Daniele Bonatto, Arnaud Schenkel, G. Lafruit","doi":"10.1109/3DTV.2018.8478484","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478484","url":null,"abstract":"This paper presents a method for view synthesis from multiple views and their depth maps for free navigation in Virtual Reality with six degrees of freedom (6DoF) and 360 video (3DoF+), including synthesizing views corresponding to stepping in or out of the scene. Such scenarios should support large baseline view synthesis, typically going beyond the view synthesis involved in light field displays [1]. Our method allows to input an unlimited number of reference views, instead of the usual left and right reference views. Increasing the number of reference views overcomes problems such as occlusions, tangential surfaces to the cameras axis and artifacts in low quality depth maps. We outperform MPEG’s reference software, VSRS [2], with a gain of up to 2.5 dB in PSNR when using four reference views.","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114837053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A NOVEL DISPARITY-ASSISTED BLOCK MATCHING-BASED APPROACH FOR SUPER-RESOLUTION OF LIGHT FIELD IMAGES","authors":"S. Farag, V. Velisavljevic","doi":"10.1109/3DTV.2018.8478627","DOIUrl":"https://doi.org/10.1109/3DTV.2018.8478627","url":null,"abstract":"Currently, available plenoptic imaging technology has limited resolution. That makes it challenging to use this technology in applications, where sharpness is essential, such as film industry. Previous attempts aimed at enhancing the spatial resolution of plenoptic light field (LF) images were based on block and patch matching inherited from classical image super-resolution, where multiple views were considered as separate frames. By contrast to these approaches, a novel super-resolution technique is proposed in this paper with a focus on exploiting estimated disparity information to reduce the matching area in the super-resolution process. We estimate the disparity information from the interpolated LR view point images (VPs). We denote our method as light field block matching super-resolution. We additionally combine our novel super-resolution method with directionally adaptive image interpolation from [1] to preserve sharpness of the high-resolution images. We prove a steady gain in the PSNR and SSIM quality of the super-resolved images for the resolution enhancement factor 8×8 as compared to the recent approaches and also to our previous work [2].","PeriodicalId":267389,"journal":{"name":"2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129548571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}