2019 International Conference on 3D Immersion (IC3D): Latest Publications

IC3D 2019 Message from General Chair
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/ic3d48390.2019.8975898
Citations: 0
Consistent Long Sequences Deep Faces
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975999
Xudong Fan, Daniele Bonatto, G. Lafruit
Face swapping in videos has strong entertainment applications. Deep Fakes (for faces) are a recent topic in deep learning in which the main idea is to substitute the face of a person in a video with the face of another person. One drawback of the method is that inconsistencies appear between the faces in successive frames, such as changing face color, flickering, or shifting eyebrows. In this paper, we propose a convolutional neural network for swapping faces based on two autoencoders that share the same encoder. In this network, the encoder distinguishes and extracts important features of faces, including facial expressions and poses; the decoders then reconstruct faces according to these features. First, we generate face datasets for person A and person B. Second, the local information of the two faces is sent to the network to train the model; after training, the model can reconstruct the corresponding face of person B when the input is a face of person A. Afterwards, we build a binary mask to select the face area and transfer color from the source face to the target face. Finally, a seamless clone merges the new faces back into the source frames to create a fake video. The experimental results show that the quality of the fake videos is improved significantly.
Citations: 0
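The color-transfer step described in the abstract (adapting the reconstructed face's color to the source face inside the binary mask) can be sketched with simple per-channel statistics matching. This is an illustrative Reinhard-style sketch, not the authors' implementation; the function name and parameters are assumptions.

```python
import numpy as np

def transfer_color(source_face, target_face, mask):
    """Match the target face's color statistics to the source face
    inside the masked region (per-channel mean/std matching)."""
    out = target_face.astype(np.float64).copy()
    m = mask.astype(bool)
    for c in range(3):  # process each color channel independently
        src = source_face[..., c][m].astype(np.float64)
        tgt = target_face[..., c][m].astype(np.float64)
        # shift and scale the target's statistics toward the source's
        out[..., c][m] = (tgt - tgt.mean()) / (tgt.std() + 1e-8) * src.std() + src.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

A seamless clone (e.g. Poisson blending) would then merge the recolored face region back into the source frame.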
A Novel Algebraic Variety Based Model for High Quality Free-Viewpoint View Synthesis on a Krylov Subspace
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975992
Mansi Sharma, Gowtham Ragavan, B. Arathi
This paper presents a new depth-image-based rendering algorithm for free-viewpoint 3DTV applications. The cracks, holes, and ghost contours caused by the visibility, disocclusion, and resampling problems associated with 3D warping lead to serious rendering artifacts in synthesized virtual views. This challenging hole-filling problem is formulated as an algebraic matrix completion problem on a higher-dimensional space of monomial features described by a novel variety model. The high-level idea of this work is to exploit the linear or nonlinear structures of the data and interpolate missing values by solving algebraic varieties associated with Hankel matrices as members of a Krylov subspace. The proposed model effectively handles artifacts appearing in wide-baseline spatial view interpolation and under arbitrary camera movements. Our model has a low runtime, and its results surpass state-of-the-art methods in quantitative and qualitative evaluation.
Citations: 2
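The Hankel-matrix idea behind this hole-filling formulation can be illustrated in a few lines: data with exponential (algebraic-variety) structure yields low-rank Hankel matrices, which is what makes completing missing entries tractable. The `hankel` helper below is an illustrative sketch, not the paper's solver.

```python
import numpy as np

def hankel(signal, rows):
    """Build a Hankel matrix (constant anti-diagonals) from
    overlapping windows of a 1-D signal."""
    n = len(signal) - rows + 1
    return np.array([signal[i:i + n] for i in range(rows)])

# A geometric sequence lies on a rank-1 variety: each window of the
# signal is a scalar multiple of the previous one, so the Hankel
# matrix is rank 1 despite having 4 x 5 entries.
s = 2.0 ** np.arange(8)
H = hankel(s, 4)
rank = np.linalg.matrix_rank(H)
```

Missing pixels can then, in principle, be interpolated by choosing values that keep the associated Hankel matrix low-rank.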
Objective and Subjective Assessment of Binocular Disparity for Projection-Based Light Field Displays
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975997
P. A. Kara, R. R. Tamboli, A. Cserkaszky, A. Barsi, Anikó Simon, Agnes Kusz, L. Bokor, M. Martini
Light field displays offer glasses-free 3D visualization, as observers do not need any viewing device to see the content in 3D. The angular resolution of such displays not only determines the achievable smoothness of the parallax effect, but also shapes the valid viewing area of light field visualization; higher angular resolutions support greater viewing distances. The binocular disparity of a light field display with a given angular resolution therefore lessens and fades away as the viewing distance increases, and the once-true 3D visualization slowly becomes perceptually equivalent to a common 2D projection. However, as the current use case scenarios of light field technology define relatively close observations, this topic is rather under-investigated. In this paper, we address the binocular disparity of projection-based light field displays. The results of objective and subjective studies are presented, in which multiple viewing distances were used to evaluate binocular disparity. Beyond the separate models, the paper analyzes the correlations between them and discusses potential applications for future use cases.
Citations: 12
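The fading of binocular disparity with viewing distance can be made concrete with the standard vergence-angle difference between two depths; the function below is a back-of-the-envelope sketch (the paper's objective model is not reproduced here), assuming a typical 63 mm inter-pupillary distance.

```python
import math

def angular_disparity_deg(depth_m, depth_offset_m, ipd_m=0.063):
    """Approximate binocular angular disparity (in degrees) between a
    point at depth_m and one at depth_m + depth_offset_m, as the
    difference of the two vergence angles."""
    near = 2 * math.atan(ipd_m / (2 * depth_m))
    far = 2 * math.atan(ipd_m / (2 * (depth_m + depth_offset_m)))
    return math.degrees(near - far)

# Disparity for a fixed 10 cm depth separation shrinks rapidly
# as the observer moves from 0.5 m to 5 m away.
close = angular_disparity_deg(0.5, 0.1)
remote = angular_disparity_deg(5.0, 0.1)
```

Because the disparity falls roughly with the square of the viewing distance, it eventually drops below both the display's angular resolution and the observer's stereoacuity, which is the effect the paper evaluates.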
Relating Eye Dominance to Neurochemistry in the Human Visual Cortex Using Ultra High Field 7-Tesla MR Spectroscopy
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8976001
I. B. Ip, C. Lunghi, U. Emir, A. Parker, H. Bridge
Although our view of the world looks singular, it is combined from each eye's separate retinal image. If the balanced input between the eyes is disrupted during early childhood, visual acuity and stereoscopic depth perception are impaired. This is because one eye dominates over the other, causing a neurological condition called 'amblyopia' [1]. In the normal, healthy visual system, the balance between the eyes can be determined using various methods to provide a measure of 'eye dominance'. Eye dominance is the preference for using the image from one eye over the other [2], suggesting that the visual system applies different weights to their input. Hence, eye dominance is relevant for understanding the mechanisms underlying binocular vision. As an investigative strategy for understanding the binocular visual system in health and in disease, we want to characterize eye dominance in the normal visual system. This information can then serve as a baseline against which to compare the extreme eye dominance of amblyopia. Specifically, we ask to what degree variations in eye dominance are related to visual cortex concentrations of the major excitatory neurotransmitter and metabolite glutamate ('Glu') and the inhibitory neurotransmitter γ-aminobutyric acid ('GABA'). Their relationship is formalised as the 'Glu/GABA' ratio. Thirteen participants took part in a 1-h psychophysical experiment to quantify eye dominance and a separate 1.5-h 7-Tesla MRI brain scan to measure hemodynamic and neurochemical responses during visual stimulation. The degree of eye dominance was predicted by the inter-ocular difference in V1 Glu/GABA balance. Stronger eye dominance correlated with an increase in inhibition during dominant relative to non-dominant eye viewing (r = −0.647, p = 0.023). In contrast, the hemodynamic response, measured with functional magnetic resonance imaging, did not correlate with eye dominance. Our findings suggest that normally occurring eye dominance is associated with the balance of neurochemicals in the early visual cortex.
Citations: 0
A 3D Convolutional Neural Network for Light Field Depth Estimation
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975996
Ágota Faluvégi, Quentin Bolsée, S. Nedevschi, V. Dădârlat, A. Munteanu
Depth estimation has always been a great challenge in the fields of computer vision and machine learning. There is a rich literature focusing on depth estimation in stereo vision or in monocular imaging, while the domain of depth estimation in light field images is still in its infancy. The paper proposes a fully convolutional 3D neural network that estimates the disparity in light field images. The proposed method is parametric, as it is able to adapt to input images of arbitrary size, and it is lightweight and less prone to overfitting thanks to its fully convolutional nature. The experiments reveal competitive results against the state of the art, demonstrating the potential offered by deep learning solutions for disparity estimation in light field images.
Citations: 4
Cybersickness and Evaluation of a Remediation System: A Pilot Study
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975989
L. André, R. Coutellier
Many questions arise regarding the use of virtual reality (VR) in the naval military field. This is particularly the case from a human factors perspective, where a system's usability is supposed to guarantee the user's performance. Cybersickness, resulting from a sensory conflict between the visual and vestibular systems, is one of the major limitations to the development of VR. Cybersickness can lead to nausea, oculomotor discomfort, and disorientation. The major aim of the current study was to evaluate the efficiency of a remediation system for cybersickness. This system is designed to help remove the sensory conflict, thanks to miniature LED screens placed in the head-mounted display (HMD). Eighteen subjects equipped with HMDs were confronted with a dynamic environment in VR. Different physiological variables were measured during immersion. Every subject showed effects of cybersickness, starting, on average, after eight minutes of exposure, even if the system may reduce symptoms under certain conditions.
Citations: 2
Frame-Wise CNN-Based View Synthesis for Light Field Camera Arrays
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975901
I. Schiopu, Patrice Rondao-Alface, A. Munteanu
The paper proposes a novel frame-wise view synthesis method based on convolutional neural networks (CNNs) for wide-baseline light field (LF) camera arrays. A novel neural network architecture that follows a multi-resolution processing paradigm is employed to synthesize an entire view. A novel loss function formulation based on the structural similarity index (SSIM) is proposed. A wide-baseline LF image dataset is generated and employed to train the proposed deep model. The proposed method synthesizes each subaperture image (SAI) of an LF image based on the corresponding SAIs from two reference LF images. Experimental results show that the proposed method yields promising results, with an average PSNR and SSIM of 34.71 dB and 0.9673, respectively, for wide baselines.
Citations: 1
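An SSIM-based loss of the kind the abstract mentions can be sketched as follows. This is a simplified single-window SSIM computed over the whole image, rather than the sliding-window SSIM typically used in practice, and it is not the authors' exact formulation.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM with a single global window: compares the
    luminance (means), contrast (variances), and structure (covariance)
    of two images in one ratio."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(pred, target):
    # SSIM is 1 for identical images, so 1 - SSIM is a natural loss.
    return 1.0 - ssim_global(pred, target)
```

A training loop would minimize `ssim_loss` between the synthesized SAI and the ground-truth SAI, optionally combined with an L1 or L2 term.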
Region Dependent Mesh Refinement for Volumetric Video Workflows
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975991
Rodrigo Diaz, Aurela Shehu, I. Feldmann, O. Schreer, P. Eisert
This paper addresses high-quality mesh optimization for volumetric video. Real persons are captured with multiple cameras and converted to 3D mesh sequences. These volumetric video assets can be used as dynamic 3D objects in arbitrary 3D rendering engines. In this way, 3D representations of real persons are achieved with a high level of detail and realism. Target use cases are augmented reality, virtual reality, and mixed reality applications. However, the final rendering quality strongly depends on the hardware capabilities of the target rendering device. In this context, a novel region-dependent mesh refinement approach is presented and evaluated with respect to existing workflows. The proposed approach is used to obtain a low overall polygon count while keeping details in semantically important regions such as human faces. It combines conventional 2D skin and face detection algorithms and transfers the results to the 3D domain. Furthermore, a dedicated camera region selection approach is presented, which enhances the sharpness and quality of the resulting 3D texture mappings.
Citations: 2
A Novel Approach for Multi-View 3D HDR Content Generation via Depth Adaptive Cross Trilateral Tone Mapping
2019 International Conference on 3D Immersion (IC3D) Pub Date : 2019-12-01 DOI: 10.1109/IC3D48390.2019.8975988
Mansi Sharma, M. S. Venkatesh, Gowtham Ragavan, Rohan Lal
In this work, we propose a novel depth-adaptive tone mapping scheme for stereo HDR imaging and 3D display. We are interested in the case where different exposures are taken from different viewpoints. The scheme employs a new depth-adaptive cross-trilateral filter (DA-CTF) for recovering High Dynamic Range (HDR) images from multiple Low Dynamic Range (LDR) images captured at different exposure levels. Explicitly leveraging additional depth information in the tone mapping operation correctly identifies global contrast changes and detail visibility changes, preserving edges and reducing halo artifacts in the 3D views synthesized by the depth-image-based rendering (DIBR) procedure. The experiments show that the proposed DA-CTF and DIBR scheme outperforms state-of-the-art operators in the enhanced depiction of tone-mapped HDR stereo images on LDR displays.
Citations: 3
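The general cross-trilateral weighting idea can be sketched as a brute-force filter whose per-neighbour weights combine spatial, range (guidance-image), and depth Gaussians. This is an illustrative sketch of the filter family, not the paper's DA-CTF; the function name and sigma values are assumptions.

```python
import numpy as np

def depth_adaptive_trilateral(img, guide, depth, radius=2,
                              sigma_s=2.0, sigma_r=0.1, sigma_d=0.1):
    """Each output pixel is a weighted average of its neighbours.
    Weights multiply three Gaussians: spatial distance, intensity
    difference in a guidance image, and depth difference, so edges
    present in either guide or depth are preserved."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[ny, nx] - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
                        wd = np.exp(-((depth[ny, nx] - depth[y, x]) ** 2) / (2 * sigma_d ** 2))
                        wgt = ws * wr * wd
                        acc += wgt * img[ny, nx]
                        wsum += wgt
            out[y, x] = acc / wsum
    return out
```

A production implementation would vectorize or separably approximate this O(n·r²) loop; the sketch only conveys how the depth term gates smoothing across depth discontinuities.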