{"title":"Evaluating VR Sickness in VR Locomotion Techniques","authors":"T. V. Gemert, Joanna Bergström","doi":"10.1109/VRW52623.2021.00078","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00078","url":null,"abstract":"VR Sickness is a form of motion sickness in Virtual Reality that affects 25-60% of the population. It is typically caused by exposure to mismatches between real and virtual motion, which happens in most VR Locomotion techniques. Hence, VR Locomotion and VR Sickness are intimately related, but this relationship is not reflected in the state of VR Sickness assessment. In this work we highlight the importance of understanding and quantifying VR Sickness in VR locomotion research. We discuss the most important factors and measures of VR to develop VR Sickness as a meaningful metric for VR Locomotion.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116506136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Matching 2D Image Patches and 3D Point Cloud Volumes by Learning Local Cross-domain Feature Descriptors","authors":"Weiquan Liu, Baiqi Lai, Cheng Wang, Xuesheng Bian, Chenglu Wen, Ming Cheng, Yu Zang, Yan Xia, Jonathan Li","doi":"10.1109/VRW52623.2021.00140","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00140","url":null,"abstract":"Establishing the relationship of 2D images and 3D point clouds is a solution to establish the spatial relationship between 2D and 3D space, i.e. AR virtual-real registration. In this paper, we propose a network, 2D3D-GAN-Net, to learn the local invariant cross-domain feature descriptors of 2D image patches and 3D point cloud volumes. Then, the learned local invariant cross-domain feature descriptors are used for matching 2D images and 3D point clouds. The Generative Adversarial Networks (GAN) is embedded into the 2D3D-GANNet, which is used to distinguish the source of the learned feature descriptors, facilitating the extraction of invariant local cross-domain feature descriptors. Experiments show that the local cross-domain feature descriptors learned by 2D3D-GAN-Net are robust, and can be used for cross-dimensional retrieval on the 2D image patches and 3D point cloud volumes dataset. In addition, the learned 3D feature descriptors are used to register the point cloud for demonstrating the robustness of learned local cross-domain feature descriptors.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"412 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126588254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Programmable Virtual Reality Environments","authors":"Nan Sun, Annette C. Feng, Ryan Patton, Y. Gingold, W. Lages","doi":"10.1109/VRW52623.2021.00192","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00192","url":null,"abstract":"We present a programmable virtual environment that allows users to create and manipulate 3D objects via code while inside virtual reality. Our environment supports the control of 3D transforms, physical, and visual properties. Programming is done by means of a custom visual block-language that is translated into Lua language scripts. We believed that the direction of this project will benefit computer science education in helping students to learn programming and spatial thinking more efficiently.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125694535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do materials matter? How surface representation affects presence in virtual environments","authors":"Jennifer Brade, Alexander Kögel, Benjamin Schreiber, Franziska Klimant","doi":"10.1109/VRW52623.2021.00214","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00214","url":null,"abstract":"This article reports the impact of different visual realism of materials applied to objects on perceived presence during an assembly task. The results of the experiment show that there is a significant difference between the more realistic scene and one where the surfaces of objects have been replaced with simpler, CAD-inspired visualizations. Despite these difference, both scenarios reach high values for presence and acceptance. Therefore, less detailed and less realistic rendering of surfaces might be sufficient to obtain a high presence and acceptance level in scenarios, which focus on manual tasks, if the associated drop in presence can be tolerated.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126359795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Text Selection in AR-HMD Using a Smartphone as an Input Device","authors":"Rajkumar Darbar, Joan Odicio-Vilchez, Thibault Lainé, Arnaud Prouzeau, M. Hachet","doi":"10.1109/VRW52623.2021.00145","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00145","url":null,"abstract":"Text selection is a common task while reading a PDF file or browsing the web. Efficient text selection techniques exist on desktops and touch devices, but are still under-explored for Augmented Reality Head Mounted Display (AR-HMD). Performing text selection in AR commonly uses hand-tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this poster paper, we explore the use of a smartphone as an input device to support text selection in AR-HMD because of its availability, familiarity, and social acceptability. As an initial attempt, we propose four eyes-free, uni-manual text selection techniques for AR-HMD, all using a smartphone - continuous touch, discrete touch, spatial movement, and raycasting.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121499662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Shared Haptic Virtual Environment for Dental Surgical Skill Training","authors":"Maximilian Kaluschke, Myat Su Yin, P. Haddawy, N. Srimaneekarn, P. Saikaew, G. Zachmann","doi":"10.1109/VRW52623.2021.00069","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00069","url":null,"abstract":"Online learning has become an effective approach to reach students who may not be able to travel to university campuses for various reasons. Its use has also dramatically increased during the current COVID-19 pandemic with social distancing and lockdown requirements. But online education has thus far been primarily limited to teaching of knowledge and cognitive skills. There is yet almost no use of online education for teaching of physical clinical skills.In this paper, we present a shared haptic virtual environment for dental surgical skill training. The system provides the teacher and student with a shared environment containing a virtual dental station with patient, a dental drill controlled by a haptic device, and a drillable tooth. It also provides automated scoring of procedure outcomes. We discuss a number of optimizations used in order to provide the high-fidelity simulation and real-time performance needed for training of high-precision clinical skills. Since tactile, in particular kinaesthetic, sense is essential in carrying out many dental procedures, an important question is how to best teach this in a virtual environment. In order to support exploring this, our system includes three modes for transmitting haptic sensations from the user performing the procedure to the user observing.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127725300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RED: A Real-Time Datalogging Toolkit for Remote Experiments","authors":"Sam Adeniyi, Evan Suma Rosenberg, Jerald Thomas","doi":"10.1109/VRW52623.2021.00183","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00183","url":null,"abstract":"The ability to conduct experiments on virtual reality systems has become increasingly compelling as the world continues to migrate towards remote research, affecting the feasibility of conducting in-person studies with human participants. The Remote Experiment Datalogger (RED) Toolkit is an open-source library designed to simplify the administration of remote experiments requiring continuous real-time data collection. Our design consists of a REST server, implemented using the Flask framework, and a client API for transparent integration with multiple game engines. We foresee the RED Toolkit serving as a building block for the handling of future remote experiments across a multitude of circumstances.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130625728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touch Recognition on Complex 3D Printed Surfaces using Filter Response Analysis","authors":"Dimitar Valkov, S. Thiele, Karim Huesmann, E. Gebauer, B. Risse","doi":"10.1109/VRW52623.2021.00043","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00043","url":null,"abstract":"Touch sensing on various surfaces has played a prominent role in human-computer interaction in the last decades. However, current technologies are mostly suited for flat or sufficiently smooth surfaces and touch sensing on complex geometries remains a challenging task, especially when the sensing hardware needs to be embedded into the interactive object. In this paper, we introduce a novel sensing approach based on the observation that conductive materials and the user’s hand or finger can be considered a complex filter system with well-conditioned input-output relationships. Different hand postures can be disambiguated by mapping the response of these filters using an intentionally small convolutional neural network. Our experiments show that even straight-forward electrode geometries provided by common 3D printers and filaments can be used to achieve high accuracy, rendering expressive interactions with complex 3D shapes possible while allowing to integrate the touch surface directly into the interactive object. Ultimately, our low-cost and versatile sensing approach enables rich interaction on a variety of objects and surfaces which is demonstrated through a series of illustrative experiments.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132817048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting the Scene-Graph-as-Bus Concept: Inter-networking Heterogeneous Applications Using glTF Fragments","authors":"Jaspreet Singh Dhanjan, A. Steed","doi":"10.1109/VRW52623.2021.00068","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00068","url":null,"abstract":"While there are now many examples of successful collaborative mixed reality applications, each application uses its own custom networking framework and applications rarely inter-operate. To enable much larger-scale distributed systems, we will need inter-networking protocols that allow heterogeneous applications to exchange data. We demonstrate a proof of concept implementation that revisits the concept of using a scene-graph as a bus. That is, sharing low-level geometry and rendering information, rather than high-level semantic events. Our networking protocol uses glTF fragments and edits to express scene changes. We use the proof of concept to explore the potential to inter-network very different applications that are based on different underlying graphics engine technology.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132144239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fighting Alcohol Craving Using Virtual Reality: the Role of Social Interaction","authors":"Andreas Junker, Carl Hutters, Daniel Reipur, Lasse Embøl, Ali Adjorlu, R. Nordahl, Stefania Serafin, Daniel Thaysen Petersen, A. Fink-Jensen","doi":"10.1109/VRW52623.2021.00054","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00054","url":null,"abstract":"Craving is a cause of relapse in patients suffering from a substance use disorder. Cue-exposure therapy builds on eliciting feelings of craving in patients in safe and controlled environments to condition them to control these feelings. Different efficient and resource-friendly methods of eliciting craving exist, (such as written material, still pictures, etc.). However, these methods create knowledge and skill transfer gaps between therapy sessions and real life scenarios. Virtual reality allows more true-to-life experiences, and research demonstrates its capabilities in eliciting craving in patients. Studies have identified different environments that elicits craving, suggesting bars to be one of the most effective ones. Research also suggests the presence of others to be an effective method of eliciting craving in users. However, the effect of social interaction has not yet been explored. Therefore, this paper presents a virtual bar with the purpose to investigate whether social interaction affects alcohol craving in users. The VR intervention is designed with close cooperation with a psychiatrist experienced in working with individuals suffering from alcohol use disorder. In this paper, we present the designed and developed VR intervention and discuss how an experiment can be conducted after the COVID-19 shutdowns.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132212231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}