{"title":"IMPReSS: Improved Multi-Touch Progressive Refinement Selection Strategy","authors":"Elaheh Samimi, Robert J. Teather","doi":"10.1109/VRW55335.2022.00069","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00069","url":null,"abstract":"We developed a progressive refinement technique for VR object selection using a smartphone as a controller. Our technique, IMPReSS, combines conventional progressive refinement selection with the marking menu-based CountMarks. CountMarks uses multi-finger touch gestures to “short-circuit” multi-item marking menus, allowing users to indicate a specific item in a sub-menu by pressing a specific number of fingers on the screen while swiping in the direction of the desired menu. IMPReSS uses this idea to reduce the number of refinements necessary during progressive refinement selection. We compared our technique with SQUAD and a multi-touch technique in terms of search time, selection time, and accuracy. The results showed that IMPReSS was both the fastest and most accurate of the techniques, likely due to a combination of tactile feedback from the smartphone screen and the advantage of fewer refinement steps.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115148859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Development of a Common Factors Based Virtual Reality Therapy System for Remote Psychotherapy Applications","authors":"Christopher Tacca, B. Kerr, Elizabeth Friis","doi":"10.1109/VRW55335.2022.00100","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00100","url":null,"abstract":"In person psychotherapy can be inaccessible to many, particularly isolated populations. Remote psychotherapy has been proposed as a more accessible alternative. However, certain limitations in the current solutions including providing a restorative therapeutic environment and therapeutic alliance have meant that many other people are left behind and do not receive adequate treatment. A common factors based VR and EEG remote psychotherapy system can make remote psychotherapy more accessible and effective for people in which current options are not sufficient.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115661642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flick Typing: Toward A New XR Text Input System Based on 3D Gestures and Machine Learning","authors":"Tian Yang, Powen Yao, Michael Zyda","doi":"10.1109/VRW55335.2022.00295","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00295","url":null,"abstract":"We propose a new text entry input method in Extended Reality that we call Flick Typing. Flick Typing utilizes the user's knowledge of a QWERTY keyboard layout, but does not explicitly provide visualization of the keys, and is agnostic to user posture or keyboard position. To type with Flick Typing, users will move their controller to where they think the target key is with respect to the controller's starting position and orientation, often with a simple flick of their wrists. Machine learning model is trained and used to adapt to the user's mental map of the keys in 3D space.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116714246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rereading the Narrative Paradox for Virtual Reality Theatre","authors":"Xiaotian Jiang, Xueni Pan, J. Freeman","doi":"10.1109/VRW55335.2022.00299","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00299","url":null,"abstract":"We examined several key issues around audience autonomy in VR theatre. Informed by a literature review and a qualitative user study (grounded theory), we developed a conceptual model that enables a quantifiable evaluation of audience experience in VR theatre. A second user study inspired by the ‘narrative paradox’, investigates the relationship between spatial exploration and narrative comprehension in two VR performances. Our results show that although navigation distracted the participants from following the full story, they were more engaged, attached and had a better overall experience as a result of their freedom to move and interact.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116785415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3Dify: Extruding Common 2D Charts with Timeseries Data","authors":"R. Brath, Martin Matusiak","doi":"10.1109/VRW55335.2022.00154","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00154","url":null,"abstract":"3D charts are not common in financial services. We review chart use in practice. We create 3D financial visualizations starting with 2D charts used extensively in financial services, then extend into the third dimension with timeseries data. We embed the 2D view into the the 3D scene; constrain interaction and add depth cues to facilitate comprehension. Usage and extensions indicate success.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120850206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seamless-walk: Novel Natural Virtual Reality Locomotion Method with a High-Resolution Tactile Sensor","authors":"Yunho Choi, Hyeonchang Jeon, Sungha Lee, Isaac Han, Yiyue Luo, Seungjun Kim, W. Matusik, Kyung-Joong Kim","doi":"10.1109/VRW55335.2022.00199","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00199","url":null,"abstract":"Natural movement is a challenging problem in virtual reality locomotion. However, existing foot-based locomotion methods lack naturalness due to physical limitations caused by wearing equipment. Therefore, in this study, we propose Seamless-walk, a novel virtual reality (VR) locomotion technique to enable locomotion in the virtual environment by walking on a high-resolution tactile carpet. The proposed Seamless-walk moves the user's virtual character by extracting the users' walking speed and orientation from raw tactile signals using machine learning techniques. We demonstrate that the proposed Seamless-walk is more natural and effective than existing VR locomotion methods by comparing them in VR game-playing tasks.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121065398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloud-Based Cross-Platform Collaborative AR in Flutter","authors":"Lars Carius, Christian Eichhorn, D. A. Plecher, G. Klinker","doi":"10.1109/VRW55335.2022.00192","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00192","url":null,"abstract":"Augmented Reality (AR) has progressed tremendously over the past years, enabling the creation of collaborative experiences and real-time environment tracking on smartphones. The strong tendency towards game engine-based approaches, however, has made it difficult for many businesses to utilize the potential of this technology. We present a novel collaborative AR framework aimed at lowering the entry barriers and operating expenses of AR applications. Our framework includes a cross-platform and cloud-based Flutter plugin combined with a web-based content management system allowing non-technical staff to take over operational tasks such as providing 3D models or moderating community annotations. To provide a state-of-the-art feature set, the AR Flutter plugin builds upon ARCore on Android and ARKit on iOS and unifies the two frameworks using an abstraction layer written in Dart. We show that the cross-platform AR Flutter plugin performs on the same level as native AR frameworks in terms of both application-level metrics and tracking-level qualities such as SLAM keyframes per second and area of tracked planes. Our contribution closes a gap in today's technological landscape by providing an AR framework seamlessly integrating with the familiar development process of cross-platform apps. With the accompanying content management system, AR can be used as a tool to achieve business objectives. The AR Flutter plugin is fully open-source, the code can be found at: https://github.com/CariusLars/ar_flutter_plugin.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124905805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Time Reversal Symmetry Based Real-time Optical Motion Capture Missing Marker Recovery Method","authors":"Dongdong Weng, Yihan Wang, Dong Li","doi":"10.1109/VRW55335.2022.00237","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00237","url":null,"abstract":"This paper proposes a deep learning model based on time reversal symmetry for real-time recovery of continuous missing marker sequences in optical motion capture. This paper firstly uses time reversal symmetry of human motion as a constraint of the model. BiLSTM is used to describe the constraint and extract the bidirectional spatiotemporal features. This paper proposes a weight position loss function for model training, which describes the effect of different joints on the pose. Compared with the existing methods, the experimental results show that the proposed method has higher accuracy and good real-time performance.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123574798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[DC] Leveraging AR Cues towards New Navigation Assistant Paradigm","authors":"Yu Zhao","doi":"10.1109/VRW55335.2022.00316","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00316","url":null,"abstract":"Extensive research has shown that the knowledge required to navigate an unfamiliar environment has been greatly reduced as many of the planning and decision-making tasks can be supplanted by the use of automated navigation systems. The progress in augmented reality (AR), particularly AR head-mounted displays (HMDs) foreshadows the prevalence of such devices as computational platforms of the future. AR displays open a new design space on navigational aids for solving this problem by superimposing virtual imagery over the environment. This dissertation abstract proposes a research agenda that investigates how to effectively leverage AR cues to help both navigation efficiency and spatial learning in walking scenarios.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114299457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using External Video to Attack Behavior-Based Security Mechanisms in Virtual Reality (VR)","authors":"Robert Miller, N. Banerjee, Sean Banerjee","doi":"10.1109/VRW55335.2022.00193","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00193","url":null,"abstract":"As virtual reality (VR) systems become prevalent in domains such as healthcare and education, sensitive data must be protected from attacks. Password-based techniques are circumvented once an attacker gains access to the user's credentials. Behavior-based approaches are susceptible to attacks from malicious users who mimic the actions of a genuine user or gain access to the 3D trajectories. We investigate a novel attack where a malicious user obtains a 2D video of genuine user interacting in VR. We demonstrate that an attacker can extract 2D motion trajectories from the video and match them to 3D enrollment trajectories to defeat behavior-based VR security.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122168558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}