2017 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON): Latest Publications

Detecting walkable plane areas by using RGB-D camera and accelerometer for visually impaired people
Kenta Imai, I. Kitahara, Y. Kameda
DOI: 10.1109/3DTV.2017.8280422 · Published 2017-06-07
Abstract: When visually impaired people walk outdoors, they rely on white canes, but the range a white cane can scan is too short for safe walking. We propose detecting walkable plane areas on the road surface using an RGB-D camera and the accelerometer of the tablet terminal attached to it. Our approach detects plane areas at a longer distance than a white cane. This is achieved in real time by using the height above the ground and the surface normal vectors computed from the depth image of the RGB-D camera.
Citations: 4
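The criterion described above (height above the ground plus surface normals from the depth image) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the organized point-cloud layout, the thresholds, and the `walkable_mask` helper are all assumptions.

```python
import numpy as np

def walkable_mask(points, up, ground_height,
                  angle_thresh_deg=15.0, height_thresh=0.05):
    """Label pixels of an organized point cloud (H x W x 3, metric camera
    coordinates) as walkable: the surface normal must be close to the
    gravity 'up' direction (e.g. from an accelerometer) and the height
    must be close to the ground plane."""
    # Surface normals via the cross product of the local tangent vectors
    dx = np.gradient(points, axis=1)
    dy = np.gradient(points, axis=0)
    n = np.cross(dx, dy)
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-12
    # Orient normals consistently toward 'up'
    n *= np.sign(n @ up)[..., None]
    # Flat: angle between normal and 'up' is below the threshold
    flat = (n @ up) > np.cos(np.radians(angle_thresh_deg))
    # Near ground: height along 'up' is close to the ground level
    height = points @ up
    near_ground = np.abs(height - ground_height) < height_thresh
    return flat & near_ground
```

A raised obstacle then fails the height test even where its top surface is flat, while the road surface passes both tests.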
Acquisition system for dense lightfield of large scenes
M. Ziegler, Ron op het Veld, J. Keinert, Frederik Zilly
DOI: 10.1109/3DTV.2017.8280412 · Published 2017-06-07
Abstract: Capturing a high-resolution, high-density lightfield is classically done with a precise gantry system and a DSLR camera. The overall baseline of available systems is small, so scene realism and change in perspective are consequently limited. This work presents a system for acquiring dense lightfields of large scenes using precise linear axes and a high-quality camera. In contrast to former systems, it can capture lightfields of natural scenes with dense sampling and significant change in perspective; the width and height of the scene can be several meters. Furthermore, we propose a novel self-calibration method for the captured images. The obtained data may serve as ground-truth reference images for evaluating light-field reconstruction methods, novel view synthesis algorithms, and more.
Citations: 26
Stereo camera upgraded to equal baseline multiple camera set (EBMCS)
A. Kaczmarek
DOI: 10.1109/3DTV.2017.8280416 · Published 2017-06-07
Abstract: The paper presents the results of using a set of five cameras, called an Equal Baseline Multiple Camera Set (EBMCS), for making 3D images, disparity maps, and depth maps. The cameras are located close to one another, so the set can be used for stereoscopy much like a stereo camera, yet it produces disparity and depth maps of better quality than those obtained with a stereo camera. Moreover, EBMCS has advantages, described in the paper, over other kinds of 3D imaging equipment such as time-of-flight (TOF) cameras, Light Detection and Ranging (LIDAR), structured-light 3D scanners, camera arrays, and camera matrices. The paper also compares the performance of EBMCS with that of stereo cameras.
Citations: 6
Extreme field-of-view for head-mounted displays
I. Rakkolainen, R. Raisamo, M. Turk, Tobias Höllerer, K. Palovuori
DOI: 10.1109/3DTV.2017.8280417 · Published 2017-06-07
Abstract: We present novel optics and head-mounted display (HMD) prototypes, which have the widest reported field-of-view (FOV) and can cover the full human FOV or even beyond. They are based on lenses and screens curved around the eyes. While this is still work in progress, the HMD prototypes and user tests suggest a feasible approach to significantly expanding the FOV of HMDs.
Citations: 9
Robust disparity estimation on sparse sampled light field images
Yan Li, G. Lafruit
DOI: 10.1109/3DTV.2017.8280414 · Published 2017-06-07
Abstract: The paper presents a robust approach to computing disparities on sparsely sampled light field images based on Epipolar-Plane Image (EPI) analysis. The Relative Gradient is leveraged as a kernel density function to cope with radiometric changes in non-Lambertian scenes. To account for the sparse light field, window-based filtering is introduced to handle noisy and homogeneous regions, decomposing the scene images into edge and non-edge regions. Separate score-volume filtering over these regions avoids the boundary-fattening effects common in stereo matching. Finally, a consistency measure detects unreliable pixels with false disparities, to which a disparity refinement is applied. Evaluation is performed on the Disney light field dataset, and the proposed method shows superior results over the state of the art.
Citations: 0
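The core idea behind EPI analysis is that a scene point traces a line across the stacked views, and the line's slope equals the point's disparity. A common way to estimate that slope is the 2x2 gradient structure tensor; the sketch below is a generic illustration of this baseline, not the paper's Relative-Gradient method, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_disparity(epi, sigma=1.5):
    """Per-pixel disparity from an epipolar-plane image (views along axis 0,
    image column along axis 1). A scene point traces a line whose slope
    dx/ds equals its disparity; the slope is the minor-eigenvector
    direction of the locally smoothed gradient structure tensor."""
    gs, gx = np.gradient(epi.astype(float))   # derivatives along s (view) and x
    jss = gaussian_filter(gs * gs, sigma)     # structure-tensor entries,
    jxx = gaussian_filter(gx * gx, sigma)     # averaged over a local window
    jsx = gaussian_filter(gs * gx, sigma)
    # Major-eigenvector angle of [[jss, jsx], [jsx, jxx]]; the EPI line is
    # perpendicular to the dominant gradient direction.
    phi = 0.5 * np.arctan2(2.0 * jsx, jss - jxx) + np.pi / 2.0
    return np.tan(phi)
```

On a synthetic EPI built from a translating 1D texture, the estimate recovers the known slope; real sparse light fields need the paper's additional robustness machinery.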
Adaptive filter for denoising 3D data captured by depth sensors
Somar Boubou, T. Narikiyo, M. Kawanishi
DOI: 10.1109/3DTV.2017.8280401 · Published 2017-06-07
Abstract: Current consumer depth sensors produce depth maps that are often noisy and lack sufficient detail. Enhancing the quality of 3D depth data obtained from compact, Kinect-like depth sensors is an increasingly popular research area. Although depth data is known to carry signal-dependent noise, state-of-the-art denoising methods tend to employ techniques that are independent of the depth signal itself. In this paper, we present a novel adaptive denoising filter to enhance object recognition from 3D depth data. We evaluate its performance against other state-of-the-art filters by the object recognition accuracy achieved after denoising the raw data with each filter. To perform object recognition from depth data, we use Differential Histogram of Normal Vectors (DHONV) features with a linear SVM. Experiments show that our proposed filter outperforms the state-of-the-art denoising methods.
Citations: 5
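The abstract does not specify the paper's filter, so as a generic illustration of signal-dependent depth denoising, the sketch below scales its inlier band with a distance-dependent noise model (a quadratic axial-noise fit of the kind often quoted for Kinect-class sensors). The function name, window size, and noise coefficients are assumptions, and `np.roll` wrap-around at the borders is a simplification.

```python
import numpy as np

def adaptive_depth_filter(depth, radius=2, k=3.0,
                          sigma=lambda z: 0.0012 + 0.0019 * (z - 0.4) ** 2):
    """Signal-dependent smoothing for a depth map (meters): each pixel is
    replaced by the mean of window neighbors whose depth lies within
    k * sigma(z) of it, so the smoothing strength follows the sensor's
    distance-dependent noise while depth discontinuities are preserved."""
    acc = np.zeros_like(depth, dtype=float)
    cnt = np.zeros_like(depth, dtype=float)
    tol = k * sigma(depth)                    # per-pixel inlier band
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            inlier = np.abs(shifted - depth) <= tol
            acc += np.where(inlier, shifted, 0.0)
            cnt += inlier
    return acc / np.maximum(cnt, 1)           # center pixel is always an inlier
```

Because a depth step of tens of centimeters far exceeds the noise band at typical ranges, pixels across an object boundary are rejected rather than averaged.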
Design of an annotation system for taking notes in virtual reality
Damien Clergeaud, P. Guitton
DOI: 10.1109/3DTV.2017.8280398 · Published 2017-06-07
Abstract: Industry uses immersive virtual environments for testing engineering solutions, and annotation systems capture the insights that arise during those virtual reality sessions. However, the annotations remain in the virtual environment: users must return to virtual reality to access them. We propose a new annotation system for VR whose design has two important aspects. First, the digital representation of the annotations makes them accessible in both the virtual and the physical world. Second, the interaction technique for taking notes in VR is designed to enhance the feeling of bringing annotations from the physical world into the virtual one and vice versa. We also present a first implementation of this design.
Citations: 16
Simulation of microlens array based plenoptic capture utilizing densely sampled light field
U. Akpinar, E. Sahin, A. Gotchev
DOI: 10.1109/3DTV.2017.8332443 · Published 2017-06-01
Abstract: Plenoptic cameras can capture the light field of a 3D scene in a single shot, which makes them attractive for several applications such as depth estimation and refocusing. The difficulty of accurately calibrating available plenoptic camera designs, however, also makes it difficult to reliably assess such applications, which raises the need for ground-truth plenoptic data. We propose an accurate and efficient way to simulate the defocused plenoptic camera based on geometric-optics principles and the concept of a densely sampled light field. In particular, we utilize the open-source computer graphics rendering tool Blender and rely on a set of conventional 2D pinhole images of the scene captured from several viewpoints within the aperture of the plenoptic camera's main lens. Elemental-image-wise examination of the plenoptic data and testing of post-processing algorithms verify the accuracy of the simulation.
Citations: 2
Viewport-dependent delivery schemes for stereoscopic panoramic video
R. G. Youvalari, M. Hannuksela, A. Aminlou, M. Gabbouj
DOI: 10.1109/3DTV.2017.8280404 · Published 2017-06-01
Abstract: Stereoscopic panoramic, or omnidirectional, video is a key ingredient of an immersive experience in virtual reality applications. Since the user views only a portion of the omnidirectional scene at each time instant, streaming the whole video in high quality is unnecessary and wastes bandwidth. To alleviate this, viewport-dependent delivery schemes have been proposed, in which the part of the captured scene within the viewer's field of view is delivered at the highest quality while the rest of the scene is delivered in lower quality. The low-quality content is visible only for a short period after fast head movements, until the next periodic intra-coded picture that can be used for switching viewpoints becomes available. This paper proposes viewport-dependent delivery schemes for streaming stereoscopic panoramic or omnidirectional video using the region-of-interest coding methods of the MV-HEVC and SHVC standards. The proposed schemes avoid the need for frequent intra-coded pictures; consequently, in the performed experiments, the streaming bitrate of the best schemes is reduced by more than 50% on average compared to a simulcast delivery method.
Citations: 5
The use of advanced imaging technology in welfare technology solutions — Some ethical aspects
Kari K. Lilja, J. Palomäki
DOI: 10.1109/3DTV.2017.8280396 · Published 2017-06-01
Abstract: Advanced imaging technology, with properties such as more realistic pictures at extremely high resolution, and new application areas such as welfare technology, where these properties are used, also involve certain ethical challenges. The protection of vulnerable patients and the privacy of employees and third parties have not yet been discussed to any great extent, but should be taken into account when designing, manufacturing, and implementing such applications.
Citations: 0