{"title":"RelivelnVR: Capturing and Reliving Virtual Reality Experiences Together","authors":"Cheng Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, A. S. Won","doi":"10.1109/VR.2019.8798363","DOIUrl":"https://doi.org/10.1109/VR.2019.8798363","url":null,"abstract":"We present a new type of sharing VR experience over distance which allows people to relive their recorded experience in VR together. We describe a pilot study examining the user experience when people share their VR experience together remotely. Finally, we discuss the implications for sharing VR experiences over time and space.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127804254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Educational Augmented Reality Application for Elementary School Students Focusing on the Human Skeletal System","authors":"M. Kouzi, Abdihakim Mao, Diego Zambrano","doi":"10.1109/VR.2019.8798058","DOIUrl":"https://doi.org/10.1109/VR.2019.8798058","url":null,"abstract":"Augmented Reality (AR) as a new field regarding Human Computing Interaction (HCI) has been gaining momentum in the last few years. Being able to project interactive graphics into real-life environments can be applied in various fields, research and commercial goals. In the field of education, textbooks are still considered to be the primary tool used by students to learn about new topics. Since AR requires interaction and exploration, it brings a ludic component that is hard to replicate using regular textbooks. The application we developed allows elementary school students to interact with a fully three-dimensional human skeleton model, using specialized virtual buttons. Students can understand this complex structure and learn the names of important bones just by using a tablet, a picture and their hands. Results show that the majority of students consider that our AR application helped them visualize and learn more about the human skeletal system. Additionally, the data we gathered shows that there was a 16% increase in correct responses regarding bone names after using our AR application. Our AR application successfully helped the students learn about the human skeletal system by introducing them to AR technologies.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127969147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR-MOOCs: A Learning Management System for VR Education","authors":"Hyundo Kim, Sukgyu Nah, Jaeyoung Oh, H. Ryu","doi":"10.1109/VR.2019.8798106","DOIUrl":"https://doi.org/10.1109/VR.2019.8798106","url":null,"abstract":"This demonstration position paper introduces a first of its kind - VR MOOC LMS. The chemistry experiment VR content is running for the students, and a supervisor can monitor their learning performance and interaction behaviors. Our LMS system (local view, world view and multi-view user interfaces) for the VR MOOC system is expected to shed light on how the interactive VR learning content can be affiliated to the proper instructional design in the near future.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125837044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enchanting Your Noodles: GAN-based Real-time Food-to-Food Translation and Its Impact on Vision-induced Gustatory Manipulation","authors":"K. Nakano, K. Kiyokawa, Daichi Horita, Keiji Yanai, Nobuchika Sakata, Takuji Narumi","doi":"10.1109/VR.2019.8798336","DOIUrl":"https://doi.org/10.1109/VR.2019.8798336","url":null,"abstract":"We propose a novel gustatory manipulation interface which utilizes the cross-modal effect of vision on taste elicited with augmented reality (AR)-based real-time food appearance modulation using a generative adversarial network (GAN). Unlike existing systems which only change color or texture pattern of a particular type of food in an inflexible manner, our system changes the appearance of food into multiple types of food in real-time flexibly, dynamically and interactively in accordance with the deformation of the food that the user is actually eating by using GAN-based image-to-image translation. The experimental results reveal that our system successfully manipulates gustatory sensations to some extent and that the effectiveness depends on the original and target types of food as well as each user's food experience.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128268296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Face Reconstruction under a HMD Occlusion","authors":"Zhengfu Peng, Ting Lu, Zhaowen Chen, Xiangmin Xu, Shu-Min Lin","doi":"10.1109/VR.2019.8797959","DOIUrl":"https://doi.org/10.1109/VR.2019.8797959","url":null,"abstract":"With the help of existing augmented vision perception motion capture technologies, virtual reality (VR) can make users immerse in virtual environments. But users are difficult to convey their actual emotions to others in virtual environments. Since the head-mounted displays (HMDs) significantly obstruct users face, it is hard to recover the full face directly with traditional techniques. In this paper, we introduce a novel method to address this problem by only using the RGB image of a person, without the need of any other sensors or devices. Firstly, we utilize the facial landmark points to estimate the face shape, expression and pose of the user. Then with the information of the Non occlusion face area, we could recover the face texture and the illumination of the current scene.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114638551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Panoramic Depth Maps from Omni-directional Stereo Images for 6 DoF Videos in Virtual Reality","authors":"Po Kong Lai, Shuang Xie, J. Lang, Robert Laqaruère","doi":"10.1109/VR.2019.8798016","DOIUrl":"https://doi.org/10.1109/VR.2019.8798016","url":null,"abstract":"In this paper we present an approach for 6 DoF panoramic videos from omni-directional stereo (ODS) images using convolutional neural networks (CNNs). More specifically, we use CNNs to generate panoramic depth maps from ODS images in real-time. These depth maps would then allow for re-projection of panoramic images thus providing 6 DoF to a viewer in virtual reality (VR). As the boundaries of a panoramic image must touch in order to envelope a viewer, we introduce a border weighted loss function as well as new error metrics specifically tailored for panoramic images. We show experimentally that training with our border weighted loss function improves performance by benchmarking a baseline skip-connected encoder-decoder style network as well as other state-of-the-art methods in depth map estimation from mono and stereo images. Finally, a practical application for VR using real world data is also demonstrated.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132381156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Rotation with Visuo-Haptics","authors":"Akihiro Nakamura, Shinji Sakai, K. Shidoji","doi":"10.1109/VR.2019.8798206","DOIUrl":"https://doi.org/10.1109/VR.2019.8798206","url":null,"abstract":"Redirected walking has been proposed as a means of making a narrow space feel wide in a virtual space. We examined the effect of visuo-haptics on the detection threshold of rotation gain when participants walked around a wall. We found that the threshold was affected by visuo-haptics only when they walked around the outside of the wall but was not affected when they walked around the inside of the wall.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131773083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Entropy of Controller Movements Reflects Mental Workload in Virtual Reality","authors":"Daniel Reinhardt, Steffen Haesler, J. Hurtienne, Carolin Wienrich","doi":"10.1109/VR.2019.8797977","DOIUrl":"https://doi.org/10.1109/VR.2019.8797977","url":null,"abstract":"Virtual Reality can impose cognitive demands on users and influence their task performance. These cognitive demands, however, have been difficult to measure precisely without inducing breaks of presence. Based on findings in psychological science on how motion trajectories reflect underlying cognitive processes, we investigated entropy (i.e. the degree of movement irregularity) as an unobtrusive measure of mental workload. Entropy values were obtained from a time-series history of controller movement data. Mental workload is considered high over a given time interval, when the measured entropy is high as well. By manipulating the difficulty of a simple rhythm game we could show that the results are comparable to the results of the NASA-TLX questionnaire, which is currently used as the gold standard in VR for measuring mental workload. Thus, our results pave the way for further investigating the entropy of controller movements as a precise measurement of mental workload in VR.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131935819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cybersickness Analysis with EEG Using Deep Learning Algorithms","authors":"D. Jeong, Sangbong Yoo, Yun Jang","doi":"10.1109/VR.2019.8798334","DOIUrl":"https://doi.org/10.1109/VR.2019.8798334","url":null,"abstract":"Cybersickness is a symptom of dizziness that occurs while experiencing Virtual Reality (VR) technology and it is presumed to occur mainly by crosstalk between the sensory and cognitive systems. However, since the sensory and cognitive systems cannot be measured objectively, it is difficult to measure cybersickness. Therefore, methodologies for measuring cybersickness have been studied in various ways. Traditional studies have collected answers to questionnaires or analyzed EEG data using machine learning algorithms. However, the system relying on the questionnaires lacks objectivity, and it is difficult to obtain highly accurate measurements with the machine learning algorithms in previous studies. In this work, we apply and compare Deep Neural Network (DNN) and Convolutional Neural Network (CNN) deep learning algorithms for objective cy-bersickness measurement from EEG data. We also propose a data preprocessing for learning and signal quality weights allowing us to achieve high performance while learning EEG data with the deep learning algorithms. Besides, we analyze video characteristics where cybersickness occurs by examining the 360 video stream segments causing cybersickness in the experiments. Finally, we draw common patterns that cause cybersickness.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking-Tolerant Visual Cryptography","authors":"Ruofei Du, Eric Lee, A. Varshney","doi":"10.1109/VR.2019.8797924","DOIUrl":"https://doi.org/10.1109/VR.2019.8797924","url":null,"abstract":"We introduce a novel secure display system, which uses visual cryptography [4] with tolerance for tracking. Our system brings cryptographic privacy from text to virtual worlds [3]. Much like traditional encryption that uses a public key and a private key, our system uses two images that are both necessary for visual decryption of the data. The public image could be widely shared on a printed page, on a traditional display (desktop, tablet, or smartphone), or in a multi-participant virtual world, while the other private image can be exclusively on a user's personal AR or VR display. Only the recipient is able to visually decrypt the data by fusing both images. In contrast to prior art, our system is able to provide tracking tolerance, making it more practically usable in modern VR and AR systems. We model the probability of misalignment caused by head or body jitter as a Gaussian distribution. Our algorithm diffuses the second image using the normalized probabilities, thus enabling the visual cryptography to be tolerant of alignment errors due to tracking.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133850281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}