Latest publications — 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

IMPReSS: Improved Multi-Touch Progressive Refinement Selection Strategy
Elaheh Samimi, Robert J. Teather
DOI: 10.1109/VRW55335.2022.00069
Abstract: We developed a progressive refinement technique for VR object selection using a smartphone as a controller. Our technique, IMPReSS, combines conventional progressive refinement selection with the marking-menu-based CountMarks. CountMarks uses multi-finger touch gestures to "short-circuit" multi-item marking menus, allowing users to indicate a specific item in a sub-menu by pressing a specific number of fingers on the screen while swiping in the direction of the desired menu. IMPReSS uses this idea to reduce the number of refinements necessary during progressive refinement selection. We compared our technique with SQUAD and a multi-touch technique in terms of search time, selection time, and accuracy. The results showed that IMPReSS was both the fastest and most accurate of the techniques, likely due to a combination of tactile feedback from the smartphone screen and the advantage of fewer refinement steps.
Citations: 1
The Development of a Common Factors Based Virtual Reality Therapy System for Remote Psychotherapy Applications
Christopher Tacca, B. Kerr, Elizabeth Friis
DOI: 10.1109/VRW55335.2022.00100
Abstract: In-person psychotherapy can be inaccessible to many, particularly isolated populations. Remote psychotherapy has been proposed as a more accessible alternative. However, limitations of current solutions, including difficulty providing a restorative therapeutic environment and a therapeutic alliance, mean that many people are left behind and do not receive adequate treatment. A common-factors-based VR and EEG remote psychotherapy system can make remote psychotherapy more accessible and effective for people for whom current options are insufficient.
Citations: 0
Flick Typing: Toward A New XR Text Input System Based on 3D Gestures and Machine Learning
Tian Yang, Powen Yao, Michael Zyda
DOI: 10.1109/VRW55335.2022.00295
Abstract: We propose a new text entry input method in Extended Reality that we call Flick Typing. Flick Typing draws on the user's knowledge of the QWERTY keyboard layout, but does not explicitly visualize the keys, and is agnostic to user posture and keyboard position. To type with Flick Typing, users move their controller to where they think the target key is with respect to the controller's starting position and orientation, often with a simple flick of the wrist. A machine learning model is trained to adapt to the user's mental map of the keys in 3D space.
Citations: 0
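As an editorial illustration (not from the paper, which does not specify its model): the mapping Flick Typing learns — from a wrist flick to the QWERTY key the user imagines in that direction — can be sketched as a nearest-key lookup over a hypothetical flat layout. The layout coordinates, origin, and function names below are all illustrative assumptions; the paper replaces the fixed table with a model calibrated to each user.

```python
import math

# Hypothetical flat QWERTY layout: key -> (x, y) in arbitrary units.
# Row offsets mimic the stagger of a physical keyboard. A learned
# model would replace this table with per-user calibrated positions.
QWERTY = {}
for row_y, offset, row in [(1.0, 0.0, "qwertyuiop"),
                           (0.0, 0.25, "asdfghjkl"),
                           (-1.0, 0.75, "zxcvbnm")]:
    for i, ch in enumerate(row):
        QWERTY[ch] = (i + offset, row_y)

def flick_to_key(direction, magnitude, origin=(4.5, 0.0)):
    """Pick the key whose position best matches a flick from `origin`.

    `direction` is a unit 2D vector derived from the controller's
    rotation; `magnitude` scales how far from the origin the user
    reached. Returns the nearest key by Euclidean distance.
    """
    tx = origin[0] + direction[0] * magnitude
    ty = origin[1] + direction[1] * magnitude
    return min(QWERTY, key=lambda k: math.dist(QWERTY[k], (tx, ty)))
```

A long flick to the left from the home-row centre would resolve to `a`, a long flick to the right to `l`; the learned model in the paper plays the role of this lookup, but in 3D and per user.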
Rereading the Narrative Paradox for Virtual Reality Theatre
Xiaotian Jiang, Xueni Pan, J. Freeman
DOI: 10.1109/VRW55335.2022.00299
Abstract: We examined several key issues around audience autonomy in VR theatre. Informed by a literature review and a qualitative user study (grounded theory), we developed a conceptual model that enables a quantifiable evaluation of audience experience in VR theatre. A second user study, inspired by the "narrative paradox", investigates the relationship between spatial exploration and narrative comprehension in two VR performances. Our results show that although navigation distracted the participants from following the full story, they were more engaged and attached, and had a better overall experience, as a result of their freedom to move and interact.
Citations: 2
3Dify: Extruding Common 2D Charts with Timeseries Data
R. Brath, Martin Matusiak
DOI: 10.1109/VRW55335.2022.00154
Abstract: 3D charts are not common in financial services. We review chart use in practice. We create 3D financial visualizations starting with 2D charts used extensively in financial services, then extend into the third dimension with timeseries data. We embed the 2D view into the 3D scene, constrain interaction, and add depth cues to facilitate comprehension. Usage and extensions indicate success.
Citations: 2
Seamless-walk: Novel Natural Virtual Reality Locomotion Method with a High-Resolution Tactile Sensor
Yunho Choi, Hyeonchang Jeon, Sungha Lee, Isaac Han, Yiyue Luo, Seungjun Kim, W. Matusik, Kyung-Joong Kim
DOI: 10.1109/VRW55335.2022.00199
Abstract: Natural movement is a challenging problem in virtual reality locomotion: existing foot-based locomotion methods lack naturalness due to the physical constraints of worn equipment. We therefore propose Seamless-walk, a novel virtual reality (VR) locomotion technique that enables locomotion in the virtual environment by walking on a high-resolution tactile carpet. Seamless-walk moves the user's virtual character by extracting the user's walking speed and orientation from raw tactile signals using machine learning techniques. We demonstrate that Seamless-walk is more natural and effective than existing VR locomotion methods by comparing them in VR game-playing tasks.
Citations: 0
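As an editorial illustration (not the authors' method, which uses learned models on raw tactile signals): the two quantities Seamless-walk extracts — walking cadence and heading — can be crudely estimated from a pressure-grid sequence without any learning, by counting pressure peaks and tracking the centre of pressure. All names and the frame format below are assumptions for the sketch.

```python
import math

def centre_of_pressure(frame):
    """Pressure-weighted centroid of one tactile frame: {(x, y): pressure}."""
    total = sum(frame.values())
    cx = sum(x * p for (x, y), p in frame.items()) / total
    cy = sum(y * p for (x, y), p in frame.items()) / total
    return cx, cy

def walk_estimate(frames, fps):
    """Crude cadence/heading estimate from a sequence of tactile frames.

    Steps are counted as upward crossings of the mean total pressure;
    heading (degrees) is the direction from the first to the last
    centre of pressure. A learned model, as in the paper, would
    replace both heuristics.
    """
    totals = [sum(f.values()) for f in frames]
    mean = sum(totals) / len(totals)
    steps = sum(1 for a, b in zip(totals, totals[1:]) if a < mean <= b)
    cadence = steps * fps / len(frames)          # steps per second
    x0, y0 = centre_of_pressure(frames[0])
    x1, y1 = centre_of_pressure(frames[-1])
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return cadence, heading
```

The appeal of the learned approach over such heuristics is robustness: real footprints overlap, drift, and vary in pressure profile between users, which simple peak counting handles poorly.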
Cloud-Based Cross-Platform Collaborative AR in Flutter
Lars Carius, Christian Eichhorn, D. A. Plecher, G. Klinker
DOI: 10.1109/VRW55335.2022.00192
Abstract: Augmented Reality (AR) has progressed tremendously over the past years, enabling the creation of collaborative experiences and real-time environment tracking on smartphones. The strong tendency towards game-engine-based approaches, however, has made it difficult for many businesses to utilize the potential of this technology. We present a novel collaborative AR framework aimed at lowering the entry barriers and operating expenses of AR applications. Our framework includes a cross-platform, cloud-based Flutter plugin combined with a web-based content management system that allows non-technical staff to take over operational tasks such as providing 3D models or moderating community annotations. To provide a state-of-the-art feature set, the AR Flutter plugin builds upon ARCore on Android and ARKit on iOS and unifies the two frameworks using an abstraction layer written in Dart. We show that the cross-platform AR Flutter plugin performs on the same level as native AR frameworks in terms of both application-level metrics and tracking-level qualities such as SLAM keyframes per second and area of tracked planes. Our contribution closes a gap in today's technological landscape by providing an AR framework that integrates seamlessly with the familiar development process of cross-platform apps. With the accompanying content management system, AR can be used as a tool to achieve business objectives. The AR Flutter plugin is fully open source; the code can be found at https://github.com/CariusLars/ar_flutter_plugin.
Citations: 2
A Time Reversal Symmetry Based Real-time Optical Motion Capture Missing Marker Recovery Method
Dongdong Weng, Yihan Wang, Dong Li
DOI: 10.1109/VRW55335.2022.00237
Abstract: This paper proposes a deep learning model based on time reversal symmetry for real-time recovery of continuous missing marker sequences in optical motion capture. The model uses the time reversal symmetry of human motion as a constraint; a BiLSTM encodes this constraint and extracts bidirectional spatiotemporal features. We also propose a weighted position loss function for model training that captures the effect of different joints on the pose. Experimental results show that, compared with existing methods, the proposed method achieves higher accuracy and good real-time performance.
Citations: 0
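As an editorial illustration of the bidirectional idea only (the paper's BiLSTM is far more capable): a gap in a marker trajectory can be filled by blending a forward constant-velocity extrapolation from before the gap with a backward one from after it, weighting each by proximity. The function below is a toy stand-in, not the paper's model.

```python
def fill_gap(before, after, gap_len):
    """Fill `gap_len` missing marker positions between two observed
    stretches of a trajectory (lists of (x, y, z) tuples).

    The forward pass extrapolates the last velocity of `before`; the
    backward pass extrapolates in reversed time from the start of
    `after`; the two are blended with weights favouring the nearer
    endpoint - mirroring, very loosely, how a bidirectional model
    uses context from both temporal directions.
    """
    fv = tuple(b - a for a, b in zip(before[-2], before[-1]))  # forward velocity
    bv = tuple(a - b for a, b in zip(after[0], after[1]))      # backward velocity
    filled = []
    for i in range(1, gap_len + 1):
        w = i / (gap_len + 1)  # 0 near `before`, 1 near `after`
        fwd = tuple(p + v * i for p, v in zip(before[-1], fv))
        bwd = tuple(p + v * (gap_len + 1 - i) for p, v in zip(after[0], bv))
        filled.append(tuple((1 - w) * f + w * b for f, b in zip(fwd, bwd)))
    return filled
```

For purely linear motion both passes agree and the gap is recovered exactly; for real human motion the learned bidirectional features are what make the recovery accurate.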
[DC] Leveraging AR Cues towards New Navigation Assistant Paradigm
Yu Zhao
DOI: 10.1109/VRW55335.2022.00316
Abstract: Extensive research has shown that the knowledge required to navigate an unfamiliar environment has been greatly reduced, as many planning and decision-making tasks can be supplanted by automated navigation systems. Progress in augmented reality (AR), particularly AR head-mounted displays (HMDs), foreshadows the prevalence of such devices as computational platforms of the future. AR displays open a new design space for navigational aids by superimposing virtual imagery over the environment. This dissertation abstract proposes a research agenda investigating how to effectively leverage AR cues to improve both navigation efficiency and spatial learning in walking scenarios.
Citations: 0
Using External Video to Attack Behavior-Based Security Mechanisms in Virtual Reality (VR)
Robert Miller, N. Banerjee, Sean Banerjee
DOI: 10.1109/VRW55335.2022.00193
Abstract: As virtual reality (VR) systems become prevalent in domains such as healthcare and education, sensitive data must be protected from attacks. Password-based techniques are circumvented once an attacker gains access to the user's credentials. Behavior-based approaches are susceptible to attacks from malicious users who mimic the actions of a genuine user or gain access to the 3D trajectories. We investigate a novel attack in which a malicious user obtains a 2D video of a genuine user interacting in VR. We demonstrate that an attacker can extract 2D motion trajectories from the video and match them to 3D enrollment trajectories to defeat behavior-based VR security.
Citations: 2
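As an editorial illustration of the matching step (the paper does not publish its algorithm): matching a video-extracted 2D trajectory against 3D enrollment trajectories can be sketched as projecting the 3D trajectory under a set of candidate camera angles and scoring the normalised curves. Orthographic projection and a yaw-only camera search are simplifying assumptions; a real attack would also estimate pitch and perspective.

```python
import math

def normalise(traj):
    """Centre a 2D trajectory on its centroid and scale to unit RMS
    size, removing camera-dependent offset and scale."""
    n = len(traj)
    cx = sum(x for x, y in traj) / n
    cy = sum(y for x, y in traj) / n
    centred = [(x - cx, y - cy) for x, y in traj]
    scale = math.sqrt(sum(x * x + y * y for x, y in centred) / n) or 1.0
    return [(x / scale, y / scale) for x, y in centred]

def project(traj3d, yaw):
    """Orthographic projection of a 3D trajectory for a camera rotated
    `yaw` radians about the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * z, y) for x, y, z in traj3d]

def match_score(video2d, enrol3d, yaw_steps=36):
    """Best (lowest) mean squared distance between the normalised video
    trajectory and any tested projection of the enrollment trajectory."""
    a = normalise(video2d)
    best = float("inf")
    for k in range(yaw_steps):
        b = normalise(project(enrol3d, 2 * math.pi * k / yaw_steps))
        d = sum((ax - bx) ** 2 + (ay - by) ** 2
                for (ax, ay), (bx, by) in zip(a, b)) / len(a)
        best = min(best, d)
    return best
```

A low score against some enrolled user is exactly what lets the attacker defeat the behavior-based check: the defense's own stored trajectories become the matching template.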