Latest Publications — 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation
Syunsuke Yoshida, Makoto Sei, A. Utsumi, H. Yamazoe
DOI: 10.1109/VRW55335.2022.00179
Abstract: In this paper, focusing on whether a person has visually recognized a target (visual cognition, VC) in iterative visual-search tasks, we propose an efficient assistance method based on VC. We first estimate the participant's VC of the target in the previous task, then determine the target for the next task based on that VC, and begin guiding the participant's attention to the next target at the moment of VC. By initiating guidance at the previous target's VC timing, attention can be guided earlier, achieving more efficient attention guidance. Preliminary experimental results showed that VC-based assistance improves task performance.
Citations: 0
VR Training: The Unused Opportunity to Save Lives During a Pandemic
Maximilian Rettinger, G. Rigoll, C. Schmaderer
DOI: 10.1109/VRW55335.2022.00092
Abstract: When on life support, patients' lives depend not only on the availability of the medical devices but also on the staff's expertise to use them. With the example of ECMO devices, which were highly demanded during the COVID-19 pandemic but rarely used until then, we developed a VR training for priming an ECMO to provide the required expertise in a standardized and simple way on a global scale. This paper presents the development of the VR training with feedback from medical and technical experts.
Citations: 4
FUSEDAR: Adaptive Environment Lighting Reconstruction for Visually Coherent Mobile AR Rendering
Yiqin Zhao, Tian Guo
DOI: 10.1109/VRW55335.2022.00137
Abstract: Obtaining accurate omnidirectional environment lighting for high-quality rendering in mobile augmented reality is challenging due to the practical limitations of mobile devices and the inherent spatial variance of lighting. In this paper, we present a novel adaptive environment lighting reconstruction method called FusedAR, which is designed from the outset to consider mobile characteristics, e.g., by exploiting mobile users' natural behavior of pointing the camera sensors perpendicular to the observation-rendering direction. Our initial evaluation shows that FusedAR achieves better rendering effects compared to using a recent deep learning-based AR lighting estimation system [8] and environment lighting captured by 360° cameras.
Citations: 1
From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on Virtual Keyboards with Probabilistic Touch Modeling
Xin Yi, Chen Liang, Haozhan Chen, Jiuxu Song, Chun Yu, Yuanchun Shi
DOI: 10.1109/VRW55335.2022.00198
Abstract: Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, bringing challenges to both tap detection and input prediction. In this poster, we demonstrate the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on virtual keyboards of different sizes. Based on an analysis of this data, we derived an input prediction algorithm that incorporates probabilistic touch detection and elastic probabilistic decoding. In the evaluation study, where participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with a 2.8% word-level error rate.
Citations: 2
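The abstract above describes probabilistic decoding only at a high level. As a purely illustrative toy — not the authors' algorithm; the key layout, Gaussian spread, and language prior below are all invented for the sketch — a probabilistic touch decoder can score each candidate word by combining a per-key Gaussian touch likelihood with a word-level prior:

```python
import math

# Hypothetical 2D key centers on a virtual keyboard (unit: key widths).
KEY_CENTERS = {"a": (0.5, 1.5), "s": (1.5, 1.5), "d": (2.5, 1.5)}
SIGMA = 0.45  # assumed isotropic spread of touch points around a key center


def key_log_likelihood(touch, key):
    """Log-likelihood (up to a constant) of a touch point under an
    isotropic Gaussian centered on the key."""
    cx, cy = KEY_CENTERS[key]
    dx, dy = touch[0] - cx, touch[1] - cy
    return -(dx * dx + dy * dy) / (2 * SIGMA ** 2)


def decode(touches, vocabulary, lm_log_prior):
    """Return the vocabulary word maximizing touch likelihood plus a
    language-model log-prior (unknown words get a tiny fallback prior)."""
    def score(word):
        if len(word) != len(touches):
            return float("-inf")
        touch_ll = sum(key_log_likelihood(t, ch) for t, ch in zip(touches, word))
        return touch_ll + lm_log_prior.get(word, math.log(1e-6))
    return max(vocabulary, key=score)
```

Noisy touches that land near the intended keys still decode to the likelier word, which is the core idea behind tolerating imprecise mid-air taps.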
[DC] Designing and Optimizing Daily-wear Photophobic Smart Sunglasses
Xiaodan Hu
DOI: 10.1109/VRW55335.2022.00318
Abstract: Photophobia, also known as light sensitivity, is a condition in which light causes discomfort. Traditional sunglasses and tinted glasses typically worn by individuals with photophobia provide only linear dimming, making it difficult to see content in the dark regions of a high-contrast environment (e.g., indoors at night). This paper presents smart dimming sunglasses that use a spatial light modulator (SLM) to flexibly dim the user's field of view based on scene detection from a high dynamic range (HDR) camera. To address the problem that the occlusion mask displayed on the SLM becomes blurred due to defocus, and thus cannot provide sufficient modulation when viewing a distant object, I design an optimization model to dilate the occlusion mask appropriately. The optimized dimming effect is verified, by camera measurement and preliminary tests with real users, to filter the desired amount of incoming light through a blurred mask.
Citations: 0
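The paper formulates mask dilation as an optimization model; as a much-simplified illustration of why dilation compensates for a defocus-blurred SLM mask — a fixed dilation radius here stands in for the paper's optimization, and all names are hypothetical — a morphological dilation of a binary occlusion mask can be sketched as:

```python
def dilate_mask(mask, radius):
    """Grow a binary occlusion mask (list of rows of 0/1) by `radius`
    pixels in every direction, so that after the SLM plane is blurred by
    defocus the mask still fully covers the bright region it occludes."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # Mark a (2*radius+1)-square neighborhood, clipped to bounds.
                for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                    for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                        out[yy][xx] = 1
    return out
```

Choosing the radius proportional to the expected defocus blur kernel is the intuition; the paper's optimization model presumably picks this amount adaptively rather than using a constant.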
Movement Augmentation in Virtual Reality: Impact on Sense of Agency Measured by Subjective Responses and Electroencephalography
Liu Wang, Mengjie Huang, Chengxuan Qin, Yiqi Wang, Rui Yang
DOI: 10.1109/VRW55335.2022.00267
Abstract: Virtual movement augmentation, which refers to the visual amplification of remapped movement, shows potential for motion-related virtual reality applications. Sense of agency (SoA), which measures the user's feeling of control over their actions, has not been fully investigated for augmented movement. This study investigated the effect of augmented movement at three levels (baseline, medium, and high) on users' SoA using both subjective responses and electroencephalography (EEG). Results show that SoA can be boosted slightly at the medium augmentation level but drops at the high level; augmented virtual movement enhances SoA only to a certain extent.
Citations: 10
Designing VR training systems for children with attention deficit hyperactivity disorder (ADHD)
Ho-Yan Kwan, Lang Lin, Conor Fahy, J. Shell, Shiqi Pang, Yongkang Xing
DOI: 10.1109/VRW55335.2022.00030
Abstract: Attention-deficit hyperactivity disorder (ADHD) is a common mental disorder in childhood, with a reported 5% global prevalence rate. This project uses Virtual Reality (VR) technology to help children improve their concentration, mitigating some of the deficiencies of existing rehabilitation methods. The research aims to apply the interactive features of VR and combine them with psychological rehabilitation training, and uses electroencephalography (EEG) for real-time feedback. A mobile application receives and visualizes the EEG data to assist medical staff and patients' families in evaluating the treatment. The resulting therapy training system has no physical space restrictions, is easy to deploy, and supports a highly customizable rehabilitation process.
Citations: 1
VR-based Context Priming to Increase Student Engagement and Academic Performance
Daniel Hawes, A. Arya
DOI: 10.1109/VRW55335.2022.00196
Abstract: Research suggests that virtual environments can be designed to increase engagement and performance on many cognitive tasks. This paper compares the efficacy of 3D environments specifically designed to prime these effects within Virtual Reality (VR). A 27-minute seminar, "The Creative Process of Making an Animated Movie," was presented to 51 participants in three VR learning spaces: two prime conditions and one no-prime condition. The prime conditions were situated learning environments (an animation studio and a theatre with animation artifacts); the no-prime condition was the theatre without artifacts. Increased academic performance was observed in both prime conditions. A UX survey was also completed.
Citations: 2
View-Adaptive Asymmetric Image Detail Enhancement for 360-degree Stereoscopic VR Content
Kin-Ming Wong
DOI: 10.1109/VRW55335.2022.00012
Abstract: We present a simple VR-specific image detail enhancement method that improves the viewing experience of 360-degree stereoscopic photographed VR content. By exploiting the fusion characteristics of binocular vision, we propose an asymmetric process that applies detail enhancement to one single image channel only. Our method can dynamically apply the enhancement in a view-adaptive fashion in real time on most low-cost standalone VR headsets. We discuss the benefits of this method with respect to authoring possibilities and the storage and bandwidth issues of photographed VR content.
Citations: 1
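To illustrate the asymmetric idea — enhancing detail in only one eye's image and relying on binocular fusion to carry the perceived sharpness — a minimal unsharp-masking sketch follows. The function names and the fixed enhancement amount are assumptions for illustration, not the paper's actual pipeline:

```python
def unsharp(channel, blurred, amount=0.6):
    """Classic unsharp masking on a 1D list of intensities:
    boosted = original + amount * (original - blurred)."""
    return [o + amount * (o - b) for o, b in zip(channel, blurred)]


def asymmetric_enhance(left, right, left_blurred):
    """Enhance only the left-eye image; the right-eye image is passed
    through untouched, halving the per-frame enhancement cost."""
    return unsharp(left, left_blurred), right
```

Because only one channel is processed, the per-frame cost is roughly halved, which is consistent with the abstract's claim of real-time operation on low-cost standalone headsets.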
ARTFM: Augmented Reality Visualization of Tool Functionality Manuals in Operating Rooms
Constantin Kleinbeck, Hannah Schieber, S. Andress, C. Krautz, Daniel Roth
DOI: 10.1109/VRW55335.2022.00219
Abstract: Error-free surgical procedures are crucial for a patient's health. However, with the increasing complexity and variety of surgical instruments, it is difficult for clinical staff to acquire detailed assembly and usage knowledge, leading to errors in process and preparation steps. Yet the gold standard for retrieving the necessary information when problems occur is the paper-based manual. Reading through the necessary instructions is time-consuming and decreases care quality. We propose ARTFM, a process-integrated manual that highlights the correct parts needed, their location, and step-by-step instructions for assembling the instrument, using an augmented reality head-mounted display.
Citations: 0