{"title":"Designing a Physiological Loop for the Adaptation of Virtual Human Characters in a Social VR Scenario","authors":"Francesco Chiossi, Robin Welsch, Steeven Villa, Lewis L. Chuang, Sven Mayer","doi":"10.1109/VRW55335.2022.00140","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00140","url":null,"abstract":"Social virtual reality is getting mainstream not only for entertainment purposes but also for productivity and education. This makes the design of social VR scenarios functional to support the operator's performance. We present a physiologically-adaptive system that optimizes for visual complexity in a dual-task scenario based on electrodermal activity. Specifically, we propose a system that adapts the amount of non-player characters while jointly performing an N-Back task (primary) and visual detection task (secondary). Our preliminary results show that when optimizing the complexity of the secondary task, users report an improved user experience.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116724847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adding Difference Flow between Virtual and Actual Motion to Reduce Sensory Mismatch and VR Sickness while Moving","authors":"Kwan Yun, G. Kim","doi":"10.1109/VRW55335.2022.00257","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00257","url":null,"abstract":"Enjoying Virtual Reality in vehicles presents a problem because of the sensory mismatch and sickness. While moving, the vestibular sense perceives actual motion in one direction, and the visual sense, visual motion in another. We propose to zero out such physiological mismatch by mixing in motion information as computed by the difference between those of the actual and virtual, namely, “Difference” flow. We present the system for computing and visualizing the difference flow and validate our approach through a small pilot field experiment. Although tested only with a low number of subjects, the initial results are promising.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129472253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Factors Associated with Retention in Computer Science Using Virtual Reality","authors":"Vidya Gaddy, F. Ortega","doi":"10.1109/VRW55335.2022.00062","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00062","url":null,"abstract":"In this research, the goal was to dissect the main attributes associated with student engagement in introductory Computer Science (CS) courses. A Virtual Reality simulation and survey were designed. Results indicated that there was a strong positive reaction to goal orientation, and a strong negative reaction to demographic characteristics.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"954 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126994970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keynote Speaker: The Hitchhiker's Guide to the Metaverse","authors":"P. Hui","doi":"10.1109/VRW55335.2022.00049","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00049","url":null,"abstract":"We envision in the future the virtual world will mix and co-exist with the physical world in such an immersive way that we cannot tell what is real and what is virtual. We will live and interact with the virtual objects that are blended into our environments with advanced holographic technology or with high-quality head mounted displays and lose the virtuality boundary. We call such a new reality Surreality. Our vision of “metaverse” is a multi-world. There are multiple virtual worlds developed by different technology companies and there is also the Surreality where real and virtual merged. While the metaverse may seem futuristic, catalysed by emerging technologies such as Extended Reality, 5G, and Artificial Intelligence, the digital “big bang” of our cyberspace is not far away. This talk aims to offer a comprehensive framework that examines the latest metaverse development under the dimensions of state-of-the-art technologies and metaverse ecosystems, illustrates the possibility of the digital “big bang”, and propose a concrete research agenda for the development of the metaverse. Reality will die; long live Surreality.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"207 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123356102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges and Opportunities for Playful Technology in Health Prevention: Using Virtual Reality to Supplement Breastfeeding Education","authors":"Kymeng Tang, K. Gerling, L. Geurts","doi":"10.1109/VRW55335.2022.00088","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00088","url":null,"abstract":"Playful technology offers the opportunity to engage users, convey knowledge and prompt reflection. We built on this potential and designed a VR simulation to give parents-to-be insights into the lived breastfeeding experience. An evaluation with 10 participants revealed that users appreciated the system but perceived similarity between the simulation and games, leading to conflicting expectations. Reflecting on this, we outline challenges for playful VR simulation design for healthcare contexts.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127684107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local Free-View Neural 3D Head Synthesis for Virtual Group Meetings","authors":"Sebastian Rings, Frank Steinicke","doi":"10.1109/VRW55335.2022.00075","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00075","url":null,"abstract":"Virtual group meetings provide enormous potential for remote group communication. However, today's video conferences incur numer-ous challenges compared to face-to-face meetings. For instance, perception of correct gaze, deictic relations, or eye-to-eye contact is impeded due to the fact that the camera is offset from the eyes of the other users' avatars and that the gallery view is different for each group member. In this paper, we describe how 3D neural heads can be synthesized to overcome these limitations. Therefore, we generate different head poses using a generative adversarial network for a given source image frame using state-of-the-art technology. These head poses can then be viewed in a local space to freely control the gaze of the head poses. We introduce and discuss five use cases for these synthesized head poses that aim to improve intelligent agents and virtual avatar representations in regular video group meetings.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127995207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ragdoll Recovery: Manipulating Virtual Mannequins to Aid Action Sequence Proficiency","authors":"Paul Watson, Swen E. Gaudl","doi":"10.1109/VRW55335.2022.00090","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00090","url":null,"abstract":"In this paper, we present a Virtual Reality (VR) prototype to support the demonstration and practice of the First Aid recovery position. When someone is unconscious and awaiting medical attention, they are placed in the recovery position to keep their airways clear. The recovery position is a commonly taught action sequence for medical professionals and trained first-aiders across industries. VR is a potential pathway for recovery position training as it can deliver spatial information of a demonstrated action for a subsequent copy. However, due to limits of physical interaction with virtual avatars, the practice of this motor sequence is normally performed in the real world on training partners and body mannequins. This limits remote practice, a key strength of any digital, educational resource. We present Ragdoll Recovery (RR), a VR prototype designed to aid training of the recovery position through avatar demonstration and virtual practice mannequins. Users can view the recovery position sequence by walking around two demonstrator avatars. Observed motor skill sequence can then be practised on a virtual mannequin that uses ragdoll physics for realistic and real-time limb behaviour. RR enables remote access to motor skill training that bridges the gap between knowledge of a demonstrated action sequence and real-world performance. We aim to use this prototype to test the viability of action sequence training within a VR educational space.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127963024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NUX IVE - a research tool for comparing voice user interface and graphical user interface in VR","authors":"Karolina Buchta, Piotr Wójcik, Mateusz Pelc, Agnieszka Górowska, Duarte Mota, Kostiantyn Boichenko, Konrad Nakonieczny, K. Wrona, Marta Szymczyk, Tymoteusz Czuchnowski, Justyna Janicka, Damian Galuszka, Radoslaw Sterna, Magdalena Igras-Cybulska","doi":"10.1109/VRW55335.2022.00342","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00342","url":null,"abstract":"A trend of using natural interaction such us speech is clearly visible in human-computer interaction, while in interactive virtual environments (IVE) still it has not become a common practice. Most of input interface elements are graphical and usually they are im-plemented as non-diegetic 2D boards hanging in 3D space. Such holographic interfaces are usually hard to learn and operate, espe-cially for inexperienced users. We have observed a need to explore the potential of using multimodal interfaces in VR and conduct the systematic research that compare the interaction mode in order to optimize the interface and increase the quality of user experience (UX). We introduce a new IVE designed to compare the user inter-action between the mode with traditional graphical user interface (GUI) with the mode in which every element of interface is replaced by voice user interface (VUI). In each version, four scenarios of interaction with a virtual assistant in a sci-fi location are implemented using Unreal Engine, each of them lasting several minutes. The IVE is supplemented with tools for automatic generating reports on user behavior (clicktracking, audiotracking and eyetracking) which makes it useful for UX and usability studies.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127994699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photogrammabot: An Autonomous ROS-Based Mobile Photography Robot for Precise 3D Reconstruction and Mapping of Large Indoor Spaces for Mixed Reality","authors":"Soroosh Mortezapoor, Christian Schönauer, Julien Rüggeberg, H. Kaufmann","doi":"10.1109/VRW55335.2022.00033","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00033","url":null,"abstract":"Precise 3D reconstruction of environments and real objects for Mixed-Reality applications can be burdensome. Photogrammetry can help to create accurate representations of actual objects in the virtual world using a high number of photos of a subject or an environment. Photogrammabot is an affordable mobile robot that facilitates photogrammetry and 3D reconstruction by autonomously and systematically capturing images. It explores an unknown indoor environment and uses map-based localization and navigation to maintain camera direction at different shooting points. Photogrammabot employs a Raspberry Pi 4B and Robot Operating System (ROS) to control the exploration and capturing processes. The photos are taken using a point-and-shoot camera mounted on a 2-DOF micro turret to enable photography from different angles and compensate for possible robot orientation errors to ensure parallel photos. Photogrammabot has been designed as a general solution to facilitate precise 3D reconstruction of unknown environments. In addition we developed tools to integrate it with and extend the Immersive Deck™ MR system [23], where it aids the setup of the system in new locations.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133933970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extended Reality and Internet of Things for Hyper-Connected Metaverse Environments","authors":"Jie Guan, Jay Irizawa, Alexis Morris","doi":"10.1109/VRW55335.2022.00043","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00043","url":null,"abstract":"The Metaverse encompasses technologies related to the internet, virtual and augmented reality, and other domains toward smart interfaces that are hyper-connected, immersive, and engaging. However, Metaverse applications face inherent disconnects between virtual and physical components and interfaces. This work explores how an Extended Metaverse framework can be used to increase the seamless integration of interoperable agents between virtual and physical environments. It contributes an early theory and practice toward the synthesis of virtual and physical smart environments anticipating future designs and their potential for connected experiences.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134048503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}