Hand-by-Hand Mentor: An AR based Training System for Piano Performance
Ruoxi Guo, Jiahao Cui, Wanru Zhao, Shuai Li, A. Hao
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00100
Abstract: Multimedia instrument training has gained great momentum, benefiting from augmented and virtual reality (AR/VR) technologies. We present an AR-based individual training system for piano performance that uses only MIDI data as input. Based on fingerings determined by a pre-trained Hidden Markov Model (HMM), the system employs musical prior knowledge to automatically generate natural-looking 3D animation of hand motion. The generated virtual hand demonstrations are rendered in a head-mounted display and registered with a piano roll. Two user studies show that the system imposes relatively low cognitive load and may increase learning efficiency and quality.
CAVE vs. HMD in Distance Perception
Théo Combe, J. Chardonnet, F. Mérienne, J. Ovtcharova
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00106
Abstract: This study analyzes the differences between a CAVE system and a head-mounted display (HMD), two technologies with important differences, focusing on distance perception, since past research on this factor has usually been carried out with only one of the two devices. We performed two experiments. First, we explored the impact of the HMD's weight while removing any other bias. Second, we compared distance perception using a simple hand interaction in a replicated environment. Results reveal that the HMD's weight has no significant impact over short distances and that using a virtual replica improves distance perception.
Revisiting Audiovisual Rotation Gains for Redirected Walking
Andreas Junker, Carl Hutters, Daniel Reipur, Lasse Embøl, N. C. Nilsson, Stefania Serafin, Evan Suma Rosenberg
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00071
Abstract: In this paper, we present a psychophysical study exploring how spatialized sound affects perceptual detection thresholds for rotation gains during exposure to virtual environments with varying degrees of visibility. The study was based on a 2×3 factorial design, crossing two types of audio (no audio and spatialized audio) and three degrees of visibility (low-, medium-, and high-density fog). We found no notable effects of sound spatialization or visibility on detection thresholds. Although future studies are required to empirically confirm that vision dominates audition, these results provide quantitative evidence that visual rotation gains may be robust to auditory interference. Furthermore, they suggest that rotation gains may be useful even when the virtual environment offers very limited visibility.
{"title":"Evaluating Presence in VR with Self-Representing Auditory-Vibrotactile Input","authors":"Guanghan Zhao, J. Orlosky, Yuuki Uranishi","doi":"10.1109/VRW52623.2021.00171","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00171","url":null,"abstract":"In this paper, we present the results of an experiment testing various pairings of auditory feedback devices on immersion and emotion in Virtual Reality (VR). We investigate the effects of bone conduction headphones, a chest-mounted vibration speaker, headphones, and combinations thereof, in combination with internal (self-representing) sounds and vibrations in two simulated scenarios. Results suggest that certain auditory-vibrotactile inputs can influence immersion in an intense virtual scene and evoke emotions in a relaxing virtual scene. In addition, self-representing sounds were observed to significantly weaken immersion in the relaxing scene.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132962604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AREarthQuakeDrill: Toward Increased Awareness of Personnel during Earthquakes via AR Evacuation Drills","authors":"Kohei Yoshimi, P. Ratsamee, J. Orlosky","doi":"10.1109/VRW52623.2021.00105","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00105","url":null,"abstract":"Evacuation drills are carried out to reduce the injury or death caused by earthquakes. However, the content of evacuation drills is generally predetermined for specific evacuation routes and actions. This inflexibility can reduce user motivation and sincerity.In this paper, we propose an Augmented Reality (AR) based evacuation drill system. We use an optical see-through head-mounted display (HMD) for mapping a room and recognizing interior. Our system constructs an AR drill environment of the real environment, and reproduces the after-effects of an earthquake by applying vibrations to the objects. We evaluated our system in an experiment with 10 participants. Comparing cases with and without AR obstacles, we found that AR training affected participant motivation and the diversity of traversed evacuation routes during practice.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131978048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auto-generating Virtual Human Behavior by Understanding User Contexts
Hanseob Kim, Ghazanfar Ali, Seungwon Kim, G. Kim, Jae-In Hwang
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00178
Abstract: Virtual humans are most natural and effective when they can act out and animate verbal and gestural actions. One popular method to realize this is to infer the actions from predefined phrases. This research aims to provide a more flexible method that activates various behaviors straight from natural conversation. Our approach uses BERT as the backbone for natural language understanding with a jointly learned sentence classifier (SC) and entity classifier (EC) on top. The SC classifies the input as conversation or action, and the EC extracts the entities for the action. A pilot study has shown promising results, with high perceived naturalness and positive user experiences.
{"title":"GazeTance Guidance: Gaze and Distance-Based Content Presentation for Virtual Museum","authors":"Haopeng Lu, Huiwen Ren, Yanan Feng, Shanshe Wang, Siwei Ma, Wen Gao","doi":"10.1109/VRW52623.2021.00113","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00113","url":null,"abstract":"The increasing popularity of virtual reality provides new opportunities for online exhibitions, especially for fragile artwork in museums. However, the limited guidance approaches of virtual museums might hinder the acquisition of knowledge. In this paper, a novel interaction concept is proposed named GazeTance Guidance, which leverages the user’s gaze point and interact-distance towards the region of interest (ROI) and helps users appreciate artworks more organized. We conducted a series of comprehension tasks on several long scroll paintings and verified the necessity of guidance. Comparing with no-guidance mechanisms, participants showed a better memory performance on the ROIs without compromising presence and user experience.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129236786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Embedded Virtual Experiment Environment System for Reality Classroom","authors":"Yanxiang Zhang, Yutong Zi, Jiayu Wang","doi":"10.1109/VRW52623.2021.00136","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00136","url":null,"abstract":"We designed a low-cost augmented virtuality system based on the Oculus Quest to embed VR in classrooms. To build the system, we measure the size and position of tables in the classroom, make a proxy model in Unity, and then embed the proxy model to seamlessly within the real classroom. In this system, schoolchildren can realize collaborative experiments in ideal conditions or some hard-to-reach scenes. This system's contribution is: (1) By manually adding obstacles, it makes up for most VR systems that can only delimit the area but cannot identify obstacles. (2) It cleverly reuses tables and makes them play the role of anti-collision, workbench, and joystick placement. (3) It expands the available area of VR in complex environments.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"393 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126746281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Seamless Natural Locomotion Concept for VR Adventure Game "The Amusement"
Marc Barnes, Dennis Briddigkeit, Tim Mayer, Hannah Paulmann, E. Langbehn
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), March 2021. DOI: 10.1109/VRW52623.2021.00269
Abstract: The Amusement is a story-driven VR adventure set in an old abandoned amusement park in the 1920s. The main game mechanics are exploration and discovery, so one of the game's distinctive features is its locomotion concept. It was designed with the following goal: the whole park can be explored entirely with natural locomotion techniques, creating a seamless fusion of exploration and discovery. To achieve this, The Amusement leverages a mix of techniques: 1) redirected walking techniques [1] such as rotation gains [2] and impossible spaces [3]; 2) passive movement: lifts, platforms, vehicles, etc.; 3) indirect movement: for example, climbing, pulling a rope to move a raft, or moving the handle of a railroad trolley. Furthermore, these techniques are extended by or coupled with physical interactions and movements such as crawling, jumping, or sliding around corners. The seamless integration of all these techniques enables the player to walk around freely in the recommended room-scale play space of 2 m × 2 m while exploring a potentially infinite virtual space.
{"title":"A Comparison of Single and Multi-View IR image-based AR Glasses Pose Estimation Approaches","authors":"Ahmet Firintepe, A. Pagani, D. Stricker","doi":"10.1109/VRW52623.2021.00168","DOIUrl":"https://doi.org/10.1109/VRW52623.2021.00168","url":null,"abstract":"In this paper, we present a study on single and multi-view image-based AR glasses pose estimation with two novel methods. The first approach is named GlassPose and is a VGG-based network. The second approach GlassPoseRN is based on ResNet18. We train and evaluate the two custom developed glasses pose estimation networks with one, two and three input images on the HMDPose dataset. We achieve errors as low as 0.10° and 0.90mm on average on all axes for orientation and translation. For both networks, we observe minimal improvements in position estimation with more input views.","PeriodicalId":256204,"journal":{"name":"2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115620911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}