{"title":"Preliminary analysis of effective assistance timing for iterative visual search tasks using gaze-based visual cognition estimation","authors":"Syunsuke Yoshida, Makoto Sei, A. Utsumi, H. Yamazoe","doi":"10.1109/VRW55335.2022.00179","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00179","url":null,"abstract":"In this paper, focusing on whether a person has visually recognized a target (visual cognition, VC) in iterative visual-search tasks, we propose an efficient assistance method based on the VC. In the proposed method, we first estimate the participant's VC of the target in the previous task. We then determine the target for the next task based on the VC and start to guide the participant's attention to the target for the next task at the VC timing. By initiating the guidance from the timing of the previous target's VC, we can guide attention at an earlier time and achieve efficient attention guidance. The preliminary experimental results showed that VC-based assistance improves task performance.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134286306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR Training: The Unused Opportunity to Save Lives During a Pandemic","authors":"Maximilian Rettinger, G. Rigoll, C. Schmaderer","doi":"10.1109/VRW55335.2022.00092","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00092","url":null,"abstract":"When on life support, the patients' lives not only depend on the availability of the medical devices but also on the staff's expertise to use them. With the example of ECMO devices, which were highly demanded during the COVID-19 pandemic but rarely used until then, we developed a VR training for priming an ECMO to provide the required expertise in a standardized and simple way on a global scale. This paper presents the development of VR training with feedback from medical and technical experts.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131651042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FUSEDAR: Adaptive Environment Lighting Reconstruction for Visually Coherent Mobile AR Rendering","authors":"Yiqin Zhao, Tian Guo","doi":"10.1109/VRW55335.2022.00137","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00137","url":null,"abstract":"Obtaining accurate omnidirectional environment lighting for high quality rendering in mobile augmented reality is challenging due to the practical limitation of mobile devices and the inherent spatial variance of lighting. In this paper, we present a novel adaptive environment lighting reconstruction method called FusedAR, which is designed from the outset to consider mobile characteristics, e.g., by exploiting mobile user natural behaviors of pointing the camera sensors perpendicular to the observation-rendering direction. Our initial evaluation shows that FusedAR achieves better rendering effects compared to using a recent deep learning-based AR lighting estimation system [8] and environment lighting captured by 360° cameras.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"49 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132934444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From 2D to 3D: Facilitating Single-Finger Mid-Air Typing on Virtual Keyboards with Probabilistic Touch Modeling","authors":"Xin Yi, Chen Liang, Haozhan Chen, Jiuxu Song, Chun Yu, Yuanchun Shi","doi":"10.1109/VRW55335.2022.00198","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00198","url":null,"abstract":"Mid-air text entry on virtual keyboards suffers from the lack of tactile feedback, bringing challenges to both tap detection and input prediction. In this poster, we demonstrated the feasibility of efficient single-finger typing in mid-air through probabilistic touch modeling. We first collected users' typing data on different sizes of virtual keyboards. Based on analyzing the data, we derived an input prediction algorithm that incorporated probabilistic touch detection and elastic probabilistic decoding. In the evaluation study where the participants performed real text entry tasks with this technique, they reached a pick-up single-finger typing speed of 24.0 WPM with 2.8% word-level error rate.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131034368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[DC] Designing and Optimizing Daily-wear Photophobic Smart Sunglasses","authors":"Xiaodan Hu","doi":"10.1109/VRW55335.2022.00318","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00318","url":null,"abstract":"Photophobia, also known as light sensitivity, is a condition in which there is a fear of light. Traditional sunglasses and tinted glasses typically worn by individuals with photophobia only provide linear dimming, leading to difficulty to see the contents in the dark region of a high-contrast environment (e.g., indoors at night). This paper presents a smart dimming sunglass that uses a spatial light modular (SLM) to flexibly dim the user's field of view based on scene detection from a high dynamic range (HDR) camera. To address the problem that the occlusion mask displayed on the SLM becomes blurred due to out-of-focus and thus cannot provide a sufficient modulation when viewing a distant object, I design an optimization model to dilate the occlusion mask appropriately. The optimized dimming effect is verified by the camera and preliminary test by real users to be able to filter the desired amount of incoming light through a blurred mask.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131201131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Movement Augmentation in Virtual Reality: Impact on Sense of Agency Measured by Subjective Responses and Electroencephalography","authors":"Liu Wang, Mengjie Huang, Chengxuan Qin, Yiqi Wang, Rui Yang","doi":"10.1109/VRW55335.2022.00267","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00267","url":null,"abstract":"Virtual movement augmentation, which refers to the visual amplification of remapped movement, shows potential to be applied in motion-related virtual reality programs. Sense of agency (SoA), which measures the user's feeling of control in their action, has not been fully investigated for augmented movement. This study investigated the effect of augmented movement at three different levels (baseline, medium, and high) on users' SoA using both subjective responses and electroencephalography (EEG). Results show that SoA can be boosted slightly at medium augmentation level but drops at high level. The augmented virtual movement only helps to enhance SoA to a certain extent.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131259851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing VR training systems for children with attention deficit hyperactivity disorder (ADHD)","authors":"Ho-Yan Kwan, Lang Lin, Conor Fahy, J. Shell, Shiqi Pang, Yongkang Xing","doi":"10.1109/VRW55335.2022.00030","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00030","url":null,"abstract":"Attention-deficit hyperactivity disorder (ADHD) is a common mental disorder in childhood, with a reported 5% global prevalence rate. The project uses Virtual Reality (VR) technology to help children improve their concentration in order to mitigate some of the various deficiencies in existing rehabilitation methods. The research aims to apply the interactive features of VR technologies and to combine them with psychological rehabilitation training technology. The research also uses Electroencephalography (EEG) brain electricity image technology for real-time information feedback. The mobile application can receive the EEG data with visualization to assist medical staff and patients' families in evaluating the treatment. The research designs a therapy training system without physical space restriction. It is easy to deploy and can be a highly customizable rehabilitation process.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133603783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR-based Context Priming to Increase Student Engagement and Academic Performance","authors":"Daniel Hawes, A. Arya","doi":"10.1109/VRW55335.2022.00196","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00196","url":null,"abstract":"Research suggests that virtual environments can be designed to increase engagement and performance with many cognitive tasks. This paper compares the efficacy of specifically designed 3D environments intended to prime these effects within Virtual Reality (VR). A 27-minute seminar “The Creative Process of Making an Animated Movie” was presented to 51 participants within three VR learning spaces: two prime and one no-prime. The prime conditions included two situated learning environments; an animation studio and a theatre with animation artifacts vs. the no-prime: theatre without artifacts. Increased academic performance was observed in both prime conditions. A UX survey was also completed.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131411488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"View-Adaptive Asymmetric Image Detail Enhancement for 360-degree Stereoscopic VR Content","authors":"Kin-Ming Wong","doi":"10.1109/VRW55335.2022.00012","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00012","url":null,"abstract":"We present a simple VR-specific image detail enhancement method that improves the viewing experience of 360-degree stereoscopic photographed VR contents. By exploiting the fusion characteristics of binocular vision, we propose an asymmetric process that applies detail enhancement to one single image channel only. Our method can dynamically apply the enhancement in a view-adaptive fashion in real-time on most low-cost standalone VR headsets. We discuss the benefits of this method with respect to authoring possibilities, storage and bandwidth issues of photographed VR contents.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127050941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARTFM: Augmented Reality Visualization of Tool Functionality Manuals in Operating Rooms","authors":"Constantin Kleinbeck, Hannah Schieber, S. Andress, C. Krautz, Daniel Roth","doi":"10.1109/VRW55335.2022.00219","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00219","url":null,"abstract":"Error-free surgical procedures are crucial for a patient's health. However, with the increasing complexity and variety of surgical instruments, it is difficult for clinical staff to acquire detailed assembly and usage knowledge leading to errors in process and preparation steps. Yet, the gold standard in retrieving necessary information when problems occur is to get the paperbased manual. Reading through the necessary instructions is time-consuming and decreases care quality. We propose ARTFM, a process integrated manual, highlighting the correct parts needed, their location, and step-by-step instructions to combine the instrument using an augmented reality head-mounted display.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133752995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}