2021 International Conference on 3D Immersion (IC3D): Latest Publications

A Novel Compression Scheme Based on Hybrid Tucker-Vector Quantization Via Tensor Sketching for Dynamic Light Fields Acquired Through Coded Aperture Camera
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687155
Joshitha Ravishankar, Mansi Sharma, Sally Khaidem
{"title":"A Novel Compression Scheme Based on Hybrid Tucker-Vector Quantization Via Tensor Sketching for Dynamic Light Fields Acquired Through Coded Aperture Camera","authors":"Joshitha Ravishankar, Mansi Sharma, Sally Khaidem","doi":"10.1109/IC3D53758.2021.9687155","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687155","url":null,"abstract":"Emerging computational light field displays are a suitable choice for realistic presentation of 3D scenes on autostereoscopic glasses-free platforms. However, the enormous size of light field limits their utilization for streaming and 3D display applications. In this paper, we propose a novel representation, coding and streaming scheme for dynamic light fields based on a novel Hybrid Tucker TensorSketch Vector Quantization (HTTSVQ) algorithm. A dynamic light field can be generated from a static light field to capture a moving 3D scene. We acquire images through different coded aperture patterns for a dynamic light field and perform their low-rank approximation using our HTTSVQ scheme, followed by encoding with High Efficiency Video Coding (HEVC). The proposed single pass coding scheme can incrementally handle tensor elements and thus enables to stream and compress light field data without the need to store it in full. Additional encoding of low-rank approximated acquired images by HEVC eliminates intra-frame, inter-frame and intrinsic redundancies in light field data. Comparison with state-of-the-art coders HEVC and its multi-view extension (MV-HEVC) exhibits superior compression performance of the proposed scheme for real-world light fields.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133077610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
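The HTTSVQ algorithm itself is not spelled out in this listing, so the following NumPy sketch is only a rough orientation on the two ingredients the abstract names: a truncated Tucker (HOSVD) low-rank approximation followed by vector quantization of the core. The tensor-sketching updates, coded-aperture acquisition and HEVC stages are omitted, and the function names and the blocking of the core into small vectors are our own illustrative choices, not the authors' method.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(tensor, ranks):
    """Truncated HOSVD: leading left singular vectors of each unfolding
    give the factor matrices; projecting the tensor yields the core."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for mode, u in enumerate(factors):
        # n-mode product of the (partially projected) tensor with u.T
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1),
                           0, mode)
    return core, factors

def vector_quantize(vectors, codebook_size, iters=10, seed=0):
    """Plain Lloyd (k-means) vector quantization of a set of small vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iters):
        dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dist.argmin(1)                      # nearest codeword per vector
        for k in range(codebook_size):
            if (idx == k).any():
                codebook[k] = vectors[idx == k].mean(0)
    return codebook, idx

# Toy dynamic light field: (views, height, width, frames)
lf = np.random.rand(9, 32, 32, 8)
core, factors = tucker_hosvd(lf, ranks=(4, 16, 16, 4))
codebook, idx = vector_quantize(core.reshape(-1, 4), codebook_size=64)
```

In the single-pass streaming setting the paper targets, the batch SVDs above would be replaced by the sketch-based incremental updates the title refers to; the point here is only the Tucker-then-VQ structure.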
The Perceptually-Supported and the Subjectively-Preferred Viewing Distance of Projection-Based Light Field Displays
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687222
P. A. Kara, Mary Guindy, T. Balogh, Anikó Simon
{"title":"The Perceptually-Supported and the Subjectively-Preferred Viewing Distance of Projection-Based Light Field Displays","authors":"P. A. Kara, Mary Guindy, T. Balogh, Anikó Simon","doi":"10.1109/IC3D53758.2021.9687222","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687222","url":null,"abstract":"As the research efforts and development processes behind light field visualization technologies advance, potential novel use cases emerge. These contexts of light field display utilization fundamentally depend on the distance of observation, due to the sheer technological nature of such glasses-free 3D systems. Yet, at the time of this paper, the number of works in the scientific literature that address viewing distance is rather limited, focusing solely on 3D visual experience based on angular density. Thus far, the personal preference of observers regarding viewing distance has not been considered by studies. Furthermore, the upcoming standardization efforts also necessitate research on the topic in order to coherently unify the methodologies of subjective tests. In this paper, we investigate the perceptually-supported and the subjectively-preferred viewing distance of light field visualization. We carried out a series of tests on multiple projection-based light field displays to study these distances, with the separate involvement of experts and regular test participants.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131164471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
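As context for why observation distance matters on such displays (a back-of-envelope illustration of our own, not the authors' model): stereo parallax requires the two eyes to catch different emitted rays, so a display with angular pitch Δθ stops separating the eyes beyond roughly d = IPD / tan(Δθ).

```python
import math

def max_stereo_distance(angular_pitch_deg, ipd_m=0.063):
    """Distance beyond which two rays one angular pitch apart can no
    longer reach the viewer's eyes separately (63 mm IPD assumed)."""
    return ipd_m / math.tan(math.radians(angular_pitch_deg))

print(max_stereo_distance(1.0))   # ~3.6 m for a 1-degree angular pitch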
Adaptive Streaming and Rendering of Static Light Fields in the Web Browser
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687239
Hendrik Lievens, Maarten Wijnants, Brent Zoomers, J. Put, Nick Michiels, P. Quax, W. Lamotte
{"title":"Adaptive Streaming and Rendering of Static Light Fields in the Web Browser","authors":"Hendrik Lievens, Maarten Wijnants, Brent Zoomers, J. Put, Nick Michiels, P. Quax, W. Lamotte","doi":"10.1109/IC3D53758.2021.9687239","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687239","url":null,"abstract":"Static light fields are an image-based technology that allow for the photorealistic representation of inanimate objects and scenes in virtual environments. As such, static light fields have application opportunities in heterogeneous domains, including education, cultural heritage and entertainment. This paper contributes the design, implementation and performance evaluation of a web-based static light field consumption system. The proposed system allows static light field datasets to be adaptively streamed over the network and then to be visualized in a vanilla web browser. The performance evaluation results prove that real-time consumption of static light fields at AR/VR-compatible framerates of 90 FPS or more is feasible on commercial off-the-shelf hardware. Given the ubiquitous availability of web browsers on modern consumption devices (PCs, smart TVs, Head Mounted Displays, . . . ), our work is intended to significantly improve the accessibility and exploitation of static light field technology. The JavaScript client code is open-sourced to maximize our work’s impact.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130648272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
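The open-sourced client is JavaScript and its actual API is not reproduced in this listing; the Python sketch below, with all names hypothetical, only illustrates the generic rate-adaptation rule any such adaptive-streaming client needs: pick the highest-bitrate light field representation that fits within the measured throughput.

```python
def pick_level(level_bitrates_kbps, measured_kbps, safety=0.8):
    """Index of the highest-bitrate representation that fits within a
    safety fraction of the measured network throughput."""
    budget = measured_kbps * safety
    best = 0
    for i, rate in enumerate(level_bitrates_kbps):  # assumed sorted ascending
        if rate <= budget:
            best = i
    return best

levels = [800, 2500, 6000, 14000]              # hypothetical kbps per level
print(pick_level(levels, measured_kbps=5000))  # -> 1 (2500 fits the 4000 kbps budget)
```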
Implementation of Multi-Focal Near-Eye Display Architecture: Optimization of Data Path
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687169
R. Ruskuls, K. Slics, Sandra Balode, Reinis Ozolins, E. Linina, K. Osmanis, I. Osmanis
{"title":"Implementation of Multi-Focal Near-Eye Display Architecture: Optimization of Data Path","authors":"R. Ruskuls, K. Slics, Sandra Balode, Reinis Ozolins, E. Linina, K. Osmanis, I. Osmanis","doi":"10.1109/IC3D53758.2021.9687169","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687169","url":null,"abstract":"In this work we describe the concept of a stereoscopic multi-focal head mounted display for augmented reality applications as a means of mitigating the vergence-accommodation conflict (VAC).Investigated are means of a practical implementation of data transfer between the rendering station and direct control logic within the headset. We rely on a DisplayPort connection as a means to transfer the necessary multi-focal image packets in real time, whilst at the receiving end – the control logic is based on an FPGA architecture responsible for decoding the DisplayPort stream and reformatting data according to the optical layout of the display. Within the design we have chosen to omit local frame buffering which potentially can result in misrepresented data, nevertheless, this approach gains a latency reduction of about 16 ms as opposed to single-frame buffering.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129315786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
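The quoted ~16 ms saving is consistent with removing exactly one frame period of queueing delay at a 60 Hz refresh rate (an assumption on our part; the abstract does not state the rate):

```python
refresh_hz = 60                       # assumed display refresh rate
frame_period_ms = 1000 / refresh_hz   # a single-frame buffer adds one period
print(f"{frame_period_ms:.1f} ms")    # 16.7 ms, matching the reported saving
```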
From Photogrammetric Reconstruction to Immersive VR Environment
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687232
M. Lhuillier
{"title":"From Photogrammetric Reconstruction to Immersive VR Environment","authors":"M. Lhuillier","doi":"10.1109/IC3D53758.2021.9687232","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687232","url":null,"abstract":"There are several steps to generate a VR environment from images: choose experimental conditions (scene, camera, trajectory, weather), take the images, reconstruct a textured 3D model thanks to a photogrammetry software, and import the 3D model into a game engine. This paper focuses on a post-processing of the photogrammetry step, mostly for outdoor environments that cannot be reconstructed by UAV. Since VR needs a 3D model in a good coordinate system (with a right scale and an axis that is vertical), a simple method is proposed to compute this. In the experiments, we first reconstruct both urban and natural immersive environments by using a helmet-held Gopro Max 360 camera, then import into Unity the 3D models in good coordinate systems, last explore the scenes like a pedestrian thanks to an Oculus Quest.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131378459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
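The paper's own method for finding the good coordinate system is not given in this listing; below is a minimal sketch of the standard similarity-transform step it implies: rotate an estimated up direction onto the engine's up axis (Unity is Y-up) and rescale from one known distance. The up estimate and reference length are assumed inputs, and the function names are ours.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix sending unit vector a onto unit vector b (Rodrigues)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):                 # opposite vectors: rotate by pi
        axis = np.cross(a, np.eye(3)[np.argmin(np.abs(a))])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + k + (k @ k) / (1.0 + c)

def to_engine_frame(points, up_estimate, measured_len, true_len):
    """Similarity transform: align the estimated up with Unity's +Y axis
    and rescale using one reference length measured in the scene."""
    r = rotation_between(np.asarray(up_estimate, float),
                         np.array([0.0, 1.0, 0.0]))
    return (true_len / measured_len) * points @ r.T

# Hypothetical inputs: up from the camera trajectory, a 2 m door as reference
pts = np.random.rand(100, 3)
aligned = to_engine_frame(pts, up_estimate=[0.05, 0.1, 0.99],
                          measured_len=1.3, true_len=2.0)
```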
Performance Analysis of DIBR-Based View Synthesis with Kinect Azure
2021 International Conference on 3D Immersion (IC3D) | Pub Date: 2021-12-08 | DOI: 10.1109/IC3D53758.2021.9687195
Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit
{"title":"Performance analysis of DIBR-based view synthesis with kinect azure","authors":"Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit","doi":"10.1109/IC3D53758.2021.9687195","DOIUrl":"https://doi.org/10.1109/IC3D53758.2021.9687195","url":null,"abstract":"DIBR (Depth Image Based Rendering) can synthesize Free Navigation virtual views with sparse multiview texture images and corresponding depth maps. There are two ways to obtain depth maps: through software or depth sensors, which is a trade-off between precision versus speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I’s Depth Estimation Reference Software with that acquired by Kinect Azure. We use IV-PSNR to evaluate their depth maps-based virtual views for the objective comparison. The quality metric with Kinect Azure regularly stay around 32 dB, and its active depth maps yields view synthesis results with better subjective performance in low-textured areas than DERS. Hence, we observe a worthy trade-off in depth performance between Kinect Azure and DERS, but with an advantage of negligible computational cost from the former. We recommend the Kinect Azure for real-time DIBR applications.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133143852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
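As background, the core DIBR operation (independent of whether the depth comes from DERS or the Kinect Azure, and not the paper's specific toolchain) is a per-pixel 3D warp: back-project each pixel with its depth through the source intrinsics, move it into the target camera frame, and re-project. A minimal pinhole sketch, leaving out the occlusion handling and hole filling a real renderer needs:

```python
import numpy as np

def dibr_warp(depth, k_src, k_dst, rot, t):
    """Where each source pixel lands in the target view.
    depth: (H, W) positive metric depth; k_*: 3x3 intrinsics;
    rot, t: source-to-target pose. Returns (H, W, 2) target coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)     # homogeneous pixels
    pts = (np.linalg.inv(k_src) @ pix) * depth.reshape(1, -1)  # back-project to 3D
    proj = k_dst @ (rot @ pts + t.reshape(3, 1))               # re-project
    return (proj[:2] / proj[2:]).T.reshape(h, w, 2)            # perspective divide
```

Splatting the source colours at these coordinates (with z-buffering for occlusions) yields the synthesized view that IV-PSNR then scores.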