{"title":"Detecting walkable plane areas by using RGB-D camera and accelerometer for visually impaired people","authors":"Kenta Imai, I. Kitahara, Y. Kameda","doi":"10.1109/3DTV.2017.8280422","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280422","url":null,"abstract":"When visually impaired people walk outdoors, they have to use white canes, but the range that a white cane can scan is not long enough for safe walking. We propose to detect walkable plane areas on the road surface by using an RGB-D camera and the accelerometer of a tablet terminal attached to the camera. Our approach can detect plane areas at a longer distance than a white cane. This is achieved by using height information from the ground and normal vectors of the surface, calculated in real time from a depth image obtained by the RGB-D camera.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"564 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132057580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acquisition system for dense lightfield of large scenes","authors":"M. Ziegler, Ron op het Veld, J. Keinert, Frederik Zilly","doi":"10.1109/3DTV.2017.8280412","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280412","url":null,"abstract":"Capturing a high-resolution, high-density lightfield is classically done using a precise gantry system and a DSLR camera. The overall baseline of available systems is small: scene realism and change in perspective are consequently limited. This work presents a system for acquiring dense lightfields of large scenes using precise linear axes and a high-quality camera. In contrast to former systems, ours can capture lightfields of natural scenes with dense sampling and significant change in perspective; the width and height of the scene can be several meters. Furthermore, we propose a novel self-calibration method for the captured images. The obtained data may serve as ground-truth reference images for evaluating light-field reconstruction methods, novel view synthesis algorithms and more.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115960251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo camera upgraded to equal baseline multiple camera set (EBMCS)","authors":"A. Kaczmarek","doi":"10.1109/3DTV.2017.8280416","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280416","url":null,"abstract":"The paper presents the results of using a set of five cameras, called an Equal Baseline Multiple Camera Set (EBMCS), for making 3D images, disparity maps and depth maps. The cameras in the set are located in the vicinity of each other, so the set can be used for stereoscopy similarly to a stereo camera. EBMCS provides disparity maps and depth maps of better quality than those obtained with a stereo camera. Moreover, EBMCS has many advantages over other kinds of equipment for making 3D images, such as a time-of-flight (TOF) camera, Light Detection and Ranging (LIDAR), a structured-light 3D scanner, a camera array and a camera matrix. These advantages are described in the paper. The paper also compares the performance of EBMCS to that of stereo cameras.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123568143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extreme field-of-view for head-mounted displays","authors":"I. Rakkolainen, R. Raisamo, M. Turk, Tobias Höllerer, K. Palovuori","doi":"10.1109/3DTV.2017.8280417","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280417","url":null,"abstract":"We present novel optics and head-mounted display (HMD) prototypes, which have the widest reported field-of-view (FOV), and which can cover the full human FOV or even beyond. They are based on lenses and screens which are curved around the eyes. While this is still work-in-progress, the HMD prototypes and user tests suggest a feasible approach to significantly expand the FOV of HMDs.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133033670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust disparity estimation on sparse sampled light field images","authors":"Yan Li, G. Lafruit","doi":"10.1109/3DTV.2017.8280414","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280414","url":null,"abstract":"The paper presents a robust approach to computing disparities on sparsely sampled light field images based on Epipolar-Plane Image (EPI) analysis. The Relative Gradient is leveraged as a kernel density function to cope with radiometric changes in non-Lambertian scenes. To account for the sparse light field, window-based filtering is introduced to handle noisy and homogeneous regions, decomposing the scene images into edge and non-edge regions. Separate score-volume filtering over these regions avoids the boundary-fattening effects common to stereo matching. Finally, a consistency measure detects unreliable pixels with false disparities, to which a disparity refinement is applied. Evaluation is performed on the Disney light field dataset, and the proposed method shows superior results over the state of the art.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131685019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive filter for denoising 3D data captured by depth sensors","authors":"Somar Boubou, T. Narikiyo, M. Kawanishi","doi":"10.1109/3DTV.2017.8280401","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280401","url":null,"abstract":"Current consumer depth sensors produce depth maps that are often noisy and lack sufficient detail. Enhancing the quality of 3D depth data obtained from compact, Kinect-like depth sensors is an increasingly popular research area. Although depth data is known to carry signal-dependent noise, state-of-the-art denoising methods tend to employ techniques that are independent of the depth signal itself. In this paper, we present a novel adaptive denoising filter to enhance object recognition from 3D depth data. We evaluate the performance of our proposed filter against other state-of-the-art filters based on the object recognition accuracy achieved after denoising the raw data with each filter. To perform object recognition from depth data, we use Differential Histogram of Normal Vectors (DHONV) features along with a linear SVM. Experiments show that our proposed filter outperforms the state-of-the-art denoising methods.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125106370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of an annotation system for taking notes in virtual reality","authors":"Damien Clergeaud, P. Guitton","doi":"10.1109/3DTV.2017.8280398","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280398","url":null,"abstract":"Industry uses immersive virtual environments for testing engineering solutions. Annotation systems allow capturing the insights that arise during those virtual reality sessions. However, those annotations remain in the virtual environment, and users are required to return to virtual reality to access them. We propose a new annotation system for VR whose design has two important aspects. First, the digital representation of the annotations allows them to be accessed in both the virtual and the physical world. Second, the interaction technique for taking notes in VR is designed to enhance the feeling of bringing the annotations from the physical world to the virtual one and vice versa. We also present a first implementation of this design.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129383037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation of microlens array based plenoptic capture utilizing densely sampled light field","authors":"U. Akpinar, E. Sahin, A. Gotchev","doi":"10.1109/3DTV.2017.8332443","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8332443","url":null,"abstract":"Plenoptic cameras can capture the light field of a 3D scene in a single shot, which makes them attractive for several applications, such as depth estimation and refocusing. The difficulty of accurately calibrating available plenoptic camera designs, however, also makes it difficult to reliably assess such applications. This raises the need for ground-truth plenoptic data. We propose an accurate and efficient way to simulate a defocused plenoptic camera based on geometric optics principles and the concept of a densely sampled light field. In particular, we utilize the open-source computer graphics rendering software Blender and rely on a set of conventional 2D pinhole images of the scene captured from several viewpoints within the aperture of the main lens of the plenoptic camera. Elemental-image-wise examination of the plenoptic data and testing of post-processing algorithms verify the accuracy of the simulation.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130223093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewport-dependent delivery schemes for stereoscopic panoramic video","authors":"R. G. Youvalari, M. Hannuksela, A. Aminlou, M. Gabbouj","doi":"10.1109/3DTV.2017.8280404","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280404","url":null,"abstract":"Stereoscopic panoramic or omnidirectional video is a key ingredient for an immersive experience in virtual reality applications. The user views only a portion of the omnidirectional scene at each time instant, so streaming the whole stereoscopic panoramic or omnidirectional video in high quality is unnecessary and would consume excessive bandwidth. To alleviate this bandwidth wastage, viewport-dependent delivery schemes have been proposed, in which the part of the captured scene within the viewer's field of view is delivered at the highest quality while the rest of the scene is delivered at lower quality. The low-quality content is visible only for a short period after fast head movements, until the next periodic intra-coded picture that can be used for viewpoint switching is available. This paper proposes viewport-dependent delivery schemes for streaming stereoscopic panoramic or omnidirectional video using the region-of-interest coding methods of the MV-HEVC and SHVC standards. The proposed schemes avoid the need for frequent intra-coded pictures; consequently, in the performed experiments the streaming bitrate is reduced by more than 50% on average for the best schemes compared to a simulcast delivery method.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124804909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The use of advanced imaging technology in welfare technology solutions — Some ethical aspects","authors":"Kari K. Lilja, J. Palomäki","doi":"10.1109/3DTV.2017.8280396","DOIUrl":"https://doi.org/10.1109/3DTV.2017.8280396","url":null,"abstract":"Advanced imaging technology, with properties such as extremely high resolution and more realistic pictures, and new application areas such as welfare technology in which these properties are used, also involve certain ethical challenges. The protection of vulnerable patients and the privacy of employees and third parties have not yet been discussed to any great extent, but should be taken into account in designing, manufacturing and implementing such applications.","PeriodicalId":279013,"journal":{"name":"2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129702942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}