{"title":"Welcoming a Holographic Virtual Coach for Balance Training at Home: Two Focus Groups with Older Adults","authors":"Fariba Mostajeran, Nikolaos Katzakis, Oscar Ariza, J. Freiwald, Frank Steinicke","doi":"10.1109/VR.2019.8797813","DOIUrl":"https://doi.org/10.1109/VR.2019.8797813","url":null,"abstract":"We report on findings from two focus groups for designing an application for balance training at home with an augmented reality virtual coach. Following a User-Centered Design approach, we conducted the focus groups with older adults at the early stages of development. Participants were shown a prototype using a Meta 2 head-mounted display, and their movements were tracked using a Kinect 2. The virtual coach gave balance training instructions and demonstrated their correct performance. Results suggest that, given the trade-offs of traditional health care, older adults are positive towards using an AR coach for their balance training.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124012663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lucid Virtual/Augmented Reality (LVAR) Integrated with an Endoskeletal Robot Suit: StillSuit: A new framework for cognitive and physical interventions to support the ageing society","authors":"S. Oota, A. Murai, M. Mochimaru","doi":"10.1109/VR.2019.8798012","DOIUrl":"https://doi.org/10.1109/VR.2019.8798012","url":null,"abstract":"Japanese society is ageing ever faster. One of the most critical issues is the shrinking working population, which is both a cause and an effect of the ‘super-ageing’ problem. We propose a new framework to ‘desterilize’ and utilize the elderly population as a new social resource. To sustain, and hopefully enhance, the cognitive and physical functions of the elderly, we integrate cognitive and physical interventions by using high-fidelity (Hi-Fi) virtual/augmented reality (Lucid Virtual/Augmented Reality, LVAR) and an endoskeletal robot suit (StillSuit), respectively. LVAR maintains a physics-fidelity (Phy-Fi) digital self for each LVAR space and provides real-time dynamic feedback to the user through StillSuit. Physical interventions are governed by a biologically relevant musculoskeletal model tailored to each user. To support social cognition, LVAR furthermore provides a social networking service with high-quality immersive 3D experiences that can be shared by remote users. With fine-tuned interventions based on biological data from humans and non-human animals, we aim to prolong the healthy life expectancy of the elderly so that they can serve as a social resource, thereby overcoming the negative spiral of the super-ageing society.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128590357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Virtual Stress Inoculation Training of Prehospital Healthcare Personnel: A Stress-Inducing Environment Design and Investigation of an Emotional Connection Factor","authors":"Mores Prachyabrued, Disathon Wattanadhirach, Richard B. Dudrow, Nat Krairojananan, P. Fuengfoo","doi":"10.1109/VR.2019.8797705","DOIUrl":"https://doi.org/10.1109/VR.2019.8797705","url":null,"abstract":"Prehospital emergency healthcare personnel are responsible for finding, rescuing, and taking prehospital care of emergency patients. They are regularly exposed to stressful and traumatic lifesaving situations. The stress involved can impact their performance and can cause mental disorders in the long term. Stress inoculation training (SIT) inoculates individuals against potential stressors by letting them practice stress-coping skills in a controlled environment. Our work explores a story-driven stressful virtual environment design that can potentially be used for SIT in the new context of emergency healthcare personnel. Users role-play a first-time emergency worker on a rescue mission. The interactive storytelling is designed to engage users and elicit strong emotional responses, and follows the three-act structure commonly found in films and video games. To understand the stress-inducing and sense-of-presence qualities of our approach, including the previously untested impact of an emotional connection factor, we conducted a between-subjects experiment involving 60 subjects. Results show that the approach successfully induces stress, as indicated by increased heart rate, galvanic skin response, and subjective stress ratings. Questionnaire results indicate a positive sense of presence. One subject group engaged in an initial friendly conversation with a virtual co-worker to establish an emotional connection; the other group had no such conversation. The group with the emotional connection showed higher physiological stress levels and more occurrences of subject behaviors reflecting presence. Medical experts reviewed our approach and suggested several applications that can benefit from its stress-inducing ability.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114159844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ICthroughVR: Illuminating Cataracts through Virtual Reality","authors":"Katharina Krösl, Carmine Elvezio, M. Wimmer, Matthias Hürbe, Steven K. Feiner, Sonja G Karst","doi":"10.1109/VR.2019.8798239","DOIUrl":"https://doi.org/10.1109/VR.2019.8798239","url":null,"abstract":"Vision impairments, such as cataracts, affect the way many people interact with their environment, yet are rarely considered by architects and lighting designers because of a lack of design tools. To address this, we present a method to simulate vision impairments, in particular cataracts, graphically in virtual reality (VR), using eye tracking for gaze-dependent effects. We also conduct a VR user study to investigate the effects of lighting on visual perception for users with cataracts. In contrast to existing approaches, which mostly provide only simplified simulations and are primarily targeted at educational or demonstrative purposes, we account for the user's vision and the hardware constraints of the VR headset. This makes it possible to calibrate our cataract simulation to the same level of degraded vision for all participants. Our study results show that we are able to calibrate the vision of all our participants to a similar level of impairment, that maximum recognition distances for escape route signs with simulated cataracts are significantly smaller than without, and that luminaires visible in the field of view are perceived as especially disturbing due to the glare effects they create. In addition, the results show that our realistic simulation increases the understanding of how people with cataracts see and could therefore also be informative for health care personnel or relatives of cataract patients.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115311321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Learning for Sports Using Wearable Head-worn and Wrist-worn Devices","authors":"H. Yeo, H. Koike, A. Quigley","doi":"10.1109/VR.2019.8798054","DOIUrl":"https://doi.org/10.1109/VR.2019.8798054","url":null,"abstract":"Novices can learn sports in a variety of ways, ranging from guidance from an instructor to watching video tutorials. In each case, subsequent and repeated self-directed practice sessions are an essential step. However, during such self-directed practice, constant guidance and feedback are absent. As a result, novices do not know whether they are making mistakes or where there is room for improvement. In this position paper, we propose using wearable devices to augment such self-directed practice sessions by providing augmented guidance and feedback. In particular, a head-worn display can provide real-time guidance, whilst wrist-worn devices can provide real-time tracking and monitoring of various states. We envision this approach being applied to various sports; in particular, it is suitable for sports that utilize precise hand motion, such as snooker, billiards, golf, archery, cricket, tennis and table tennis.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114249183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Homing by triangle completion in consumer-oriented virtual reality environments","authors":"José L. Dorado, P. Figueroa, J. Chardonnet, F. Mérienne, J. T. Hernández","doi":"10.1109/VR.2019.8798059","DOIUrl":"https://doi.org/10.1109/VR.2019.8798059","url":null,"abstract":"Homing is a fundamental task that plays a vital role in spatial navigation. Its performance depends on the computation of a homing vector, for which human beings can simultaneously use two different cognitive strategies: an online strategy based on self-motion cues, known as path integration (PI), and an offline strategy called piloting, based on the spatial image of the path. Studies using virtual reality environments (VEs) have shown that human beings can perform homing tasks with acceptable performance. However, in these studies, subjects were able to walk naturally across large tracking areas, or researchers provided them with high-end large-immersive displays. Unfortunately, these configurations are far from current consumer-oriented devices, and very little is known about how their limitations can influence these cognitive processes. Using a triangle completion paradigm, we assessed homing tasks in two consumer-oriented displays (an HTC Vive and a GearVR) and two consumer-oriented interaction devices (a Virtuix Omni Treadmill and a Touchpad Control). Our results show that when locomotion is available (treadmill condition), there are significant effects of display and path complexity. In contrast, when locomotion is restricted (touchpad condition), only some effects of path complexity were found. Some future research directions are therefore proposed.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114614856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MOSIS: Immersive Virtual Field Environments for Earth Sciences","authors":"Pedro Rossa, Rafael Kenji Horota, A. M. Junior, A. S. Aires, E. Souza, Gabriel Lanzer Kannenberg, Jean Luca de Fraga, L. Santana, Demetrius Nunes Alves, Julia Boesing, L. G. D. Silveira, M. Veronez, C. Cazarin","doi":"10.1109/VR.2019.8797909","DOIUrl":"https://doi.org/10.1109/VR.2019.8797909","url":null,"abstract":"For the past decades, environmental studies have been mostly a field activity, especially in the geosciences, where rock exposures could not be represented or taken into laboratories. In addition, VR (Virtual Reality) is growing in many academic areas as an important technology for representing 3D objects, bringing immersion to even the simplest tasks. Following that trend, MOSIS (Multi Outcrop Sharing and Interpretation System) was created to help earth scientists and other users visualize and study VFEs (Virtual Field Environments) from all over the world in immersive virtual reality.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127474894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Occlusion Management in VR: A Comparative Study","authors":"Lili Wang, Han Zhao, Zesheng Wang, Jian Wu, Bingqiang Li, Zhiming He, V. Popescu","doi":"10.1109/VR.2019.8798025","DOIUrl":"https://doi.org/10.1109/VR.2019.8798025","url":null,"abstract":"VR applications rely on the user's ability to explore the virtual scene efficiently. In complex scenes, occlusions limit what the user can see from a given location, and the user has to navigate the viewpoint around occluders to gain line of sight to the hidden parts of the scene. When the disoccluded regions prove to be of no interest, the user has to retrace their path, making scene exploration inefficient. Furthermore, the user might not be able to assume a viewpoint that would reveal the occluded regions due to physical limitations, such as obstacles in the real world hosting the VR application, viewpoints beyond the tracked area, or viewpoints above the user's head that cannot be reached by walking. Several occlusion management methods have been proposed in visualization research, such as top view, X-ray, and multi-perspective visualization, which help the user see more from the current position and have the potential to improve the exploration efficiency of complex scenes. This paper reports on a study that investigates the potential of these three occlusion management methods in the context of VR applications, compared to conventional navigation. Participants were required to explore two virtual scenes: purchasing five items in a virtual supermarket, and finding three people in a virtual parking garage. The task performance metrics were task completion time, total distance traveled, and total head rotation. The study also measured user spatial awareness, depth perception, and simulator sickness. The results indicate that users benefit from the top view visualization, which helps them learn the scene layout and understand their position within the scene, but the top view does not let users find targets easily, due to occlusions in the vertical direction and the small image footprint of the targets. The X-ray visualization method worked better in the garage scene, a scene with a few big occluders and a low occlusion depth complexity, and less well in the supermarket scene, a scene with many small occluders that create a high occlusion depth complexity. The multi-perspective visualization method achieves better performance than the top view and X-ray methods in both scenes. There are no significant differences between the three methods and the conventional method in terms of spatial awareness, depth perception, and simulator sickness.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122593140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of Pointing Interfaces with an AR Agent for Multi-section Information Guidance","authors":"Nattaon Techasarntikul, T. Mashita, P. Ratsamee, Yuuki Uranishi, H. Takemura, J. Orlosky, K. Kiyokawa","doi":"10.1109/VR.2019.8798061","DOIUrl":"https://doi.org/10.1109/VR.2019.8798061","url":null,"abstract":"In educational settings such as art galleries or museums, Augmented Reality (AR) has the potential to provide detailed information about exhibits. However, dealing with items that contain information in multiple sections or areas is still a significant challenge. For example, a large painting may contain many minute details, which requires a system that can explain its broader features rather than just a generic description. To address this challenge, we introduce an AR guidance system that uses an embodied agent to point out items and explain each part of an exhibit in detail. We also designed and tested three different pointing interfaces for the embodied agent: gesture only, gesture with a dot laser, and gesture with a line laser. To evaluate this interface, we conducted a user experiment simulating painting guidance to test interest and exhibit memory. During the experiment, the agent pointed to various areas of interest in the painting and provided a detailed description to participants. The results show that search times for target positions were fastest with the line laser. However, no particular interface outperformed the others in memory recall of exhibit content.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123391719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive EEG: Evaluating Electroencephalography in Virtual Reality","authors":"J. Tauscher, F. W. Schottky, S. Grogorick, P. M. Bittner, Maryam Mustafa, M. Magnor","doi":"10.1109/VR.2019.8797858","DOIUrl":"https://doi.org/10.1109/VR.2019.8797858","url":null,"abstract":"We investigate the feasibility of combining off-the-shelf virtual reality headsets and electroencephalography. EEG is a highly sensitive tool and is subject to strong distortions when physical force is exerted on it, such as mounting a VR headset on top of it, which twists sensors and cables. Our study compares the signal quality of EEG in VR against immersive dome environments and traditional displays, using an oddball-paradigm experimental design. Furthermore, we compare the signal quality of EEG when combined with an unmodified commodity VR headset against a modified version that reduces physical strain on the EEG headset. Our results indicate that it is possible to combine EEG and VR, even without modification, under certain conditions. Customising the VR headset improves signal quality. Additionally, the display latency of the different modalities is visible on a neurological level.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114943379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}