{"title":"Assessment of Driver Attention during a Safety Critical Situation in VR to Generate VR-based Training","authors":"Efe Bozkir, David Geisler, Enkelejda Kasneci","doi":"10.1145/3343036.3343138","DOIUrl":"https://doi.org/10.1145/3343036.3343138","url":null,"abstract":"Crashes involving pedestrians on urban roads can be fatal. In order to prevent such crashes and provide safer driving experience, adaptive pedestrian warning cues can help to detect risky pedestrians. However, it is difficult to test such systems in the wild, and train drivers using these systems in safety critical situations. This work investigates whether low-cost virtual reality (VR) setups, along with gaze-aware warning cues, could be used for driver training by analyzing driver attention during an unexpected pedestrian crossing on an urban road. Our analyses show significant differences in distances to crossing pedestrians, pupil diameters, and driver accelerator inputs when the warning cues were provided. Overall, there is a strong indication that VR and Head-Mounted-Displays (HMDs) could be used for generating attention increasing driver training packages for safety critical situations.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125719605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Influence of the Viewpoint in a Self-Avatar on Body Part and Self-Localization","authors":"Albert H. van der Veer, Adrian J. T. Alsmith, M. Longo, Hong Yu Wong, D. Diers, Matthias Bues, Anna P. Giron, B. Mohler","doi":"10.1145/3343036.3343124","DOIUrl":"https://doi.org/10.1145/3343036.3343124","url":null,"abstract":"The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. Therefore, we aimed to determine if manipulating the viewpoint to either the height of the eyes or to the height of the chest would influence self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as ”directly at you” (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase where participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Participants overall pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head) with the following distribution: the upper face (25%), the upper torso (25%), above the head (15%) and below the torso (12%). We only found an influence of viewpoint (eye- vs chest-height) during the self-avatar adaptation phase for body part localization and not for self-localization. The overall change in error distance for body part localization for the viewpoint at eye-height was small (M = –2.8 cm), while the overall change in error distance for the viewpoint at chest-height was significantly larger, and in the upwards direction relative to the body parts (M = 21.1 cm). In a post-questionnaire, there was no significant difference in embodiment scores between the viewpoint conditions. Most interestingly, having a self-avatar did not change the results on the self-localization pointing task, even with a novel viewpoint (chest-height). Possibly, body-based cues, or memory, ground the self when in VR. However, the present results caution the use of altered viewpoints in applications where veridical position sense of body parts is required.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123814887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stimulating the Brain in VR: Effects of Transcranial Direct-Current Stimulation on Redirected Walking","authors":"E. Langbehn, Frank Steinicke, Ping Koo-Poeggel, L. Marshall, G. Bruder","doi":"10.1145/3343036.3343125","DOIUrl":"https://doi.org/10.1145/3343036.3343125","url":null,"abstract":"Redirected walking (RDW) enables virtual reality (VR) users to explore large virtual environments (VE) in confined tracking spaces by guiding users on different paths in the real world than in the VE. However, so far, spaces larger than typical room-scale setups of 5m × 5m are still required to allow infinitely straight walking, i. e., to prevent a subjective mismatch between real and virtual paths. This mismatch could in theory be reduced by interacting with the underlying brain activity. Transcranial direct-current stimulation (tDCS) presents a simply method able to modify ongoing cortical activity and excitability levels. Hence, this approach provides enormous potential to widen detection thresholds for RDW, and consequently reduce the above mentioned space requirements. In this paper, we conducted a psychophysical experiment using tDCS to evaluate detection thresholds for RDW gains. In the stimulation conditon 1.25 mA cathodal tDCS were applid over the prefrontal cortex (AF4 with Pz for the return current) for 20 minutes. TDCS failed to exert a significant overall effect on detection thresholds. However, for the highest gain only, path deviance was significantly modified by tDCS. In addition, subjectively reported disorientation was significantly lower during the tDCS as compared to the sham condition. Along the same line, oculomotor cyber sickness symptoms after the session were significantly decreased compared to baseline in tDCS, while there was no significant effect in sham. This work presents the first use of tDCS during virtual walking which provides new vistas for future research in the area of neurostimulation in VR.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117161718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Motion on the Perception of Material Appearance","authors":"Ruiquan Mao, Manuel Lagunas, B. Masiá, D. Gutierrez","doi":"10.1145/3343036.3343122","DOIUrl":"https://doi.org/10.1145/3343036.3343122","url":null,"abstract":"We analyze the effect of motion in the perception of material appearance. First, we create a set of stimuli containing 72 realistic materials, rendered with varying degrees of linear motion blur. Then we launch a large-scale study on Mechanical Turk to rate a given set of perceptual attributes, such as brightness, roughness, or the perceived strength of reflections. Our statistical analysis shows that certain attributes undergo a significant change, varying appearance perception under motion. In addition, we further investigate the perception of brightness, for the particular cases of rubber and plastic materials. We create new stimuli, with ten different luminance levels and seven motion degrees. We launch a new user study to retrieve their perceived brightness. From the users’ judgements, we build two-dimensional maps showing how perceived brightness varies as a function of the luminance and motion of the material.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124735724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Video Game Locomotion Methods Affect Navigation in Virtual Environments","authors":"Richard A. Paris, Joshua Klag, P. Rajan, Lauren E. Buck, T. McNamara, Bobby Bodenheimer","doi":"10.1145/3343036.3343131","DOIUrl":"https://doi.org/10.1145/3343036.3343131","url":null,"abstract":"Navigation, or the means by which people find their way in an environment, depends on the ability to combine information from multiple sources so that properties of an environment, such as the location of a goal, can be estimated. An important source of information for navigation are spatial cues generated by self-motion. Navigation based solely on body-based cues generated by self-motion is called path integration. In virtual reality and video games, many locomotion systems, that is, methods that move users through a virtual environment, can often distort or deprive users of important self-motion cues. There has been much study of this issue, and in this paper, we extend that study in novel directions by assessing the effect of four game-like locomotion interfaces on navigation performance using path integration. The salient features of our locomotion interfaces are that two are primarily continuous, i.e., more like a joystick, and two are primarily discrete, i.e., more like teleportation. Our main findings are that the perspective of path integration, people are able to use all methods, although continuous methods outperform discrete methods.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133666468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual Comparison of Procedural and Data-Driven Eye Motion Jitter","authors":"S. Jörg, A. Duchowski, Krzysztof Krejtz, Anna Niedzielska","doi":"10.1145/3343036.3343130","DOIUrl":"https://doi.org/10.1145/3343036.3343130","url":null,"abstract":"Research has shown that keyframed eye motions are perceived as more realistic when some noise is added to eyeball motions and to pupil size changes. We investigate whether this noise, in contrast to being motion captured, can be synthesized with standard techniques, e.g., procedural or data-driven approaches. In a two-alternative forced choice task, we compare eye animations created with four different techniques: motion captured, procedural, data-driven, and keyframed (lacking noise). Our perceptual experiment uses three character models with different levels of realism and two motions. Our results suggest that procedural and data-driven noise can be used to create animations at similar perceived naturalness to our motion captured approach. Participants’ eye movements when viewing the animations show that animations without jitter yielded fewer fixations, suggesting ease of dismissal as unnatural.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117002723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Infinity Walk in VR: Effects of Cognitive Load on Velocity during Continuous Long-Distance Walking","authors":"Omar Janeh, Nikolaos Katzakis, Jonathan Tong, Frank Steinicke","doi":"10.1145/3343036.3343119","DOIUrl":"https://doi.org/10.1145/3343036.3343119","url":null,"abstract":"Bipedal walking is generally considered to be the most natural and common locomotion technique in the physical world, for humans, and the most presence-enhancing form of locomotion in virtual reality (VR). However, there are significant differences in the way people walk in VR compared to their walking behaviour in the real world. For instance, previous studies have shown a significant decrease of gait parameters, in particular, velocity and step length in the virtual environment (VE). However, those studies have only considered short periods of walking. In contrast, many VR applications involve extended exposures to the VE and often include additional cognitive tasks such as way-finding. Hence, it remains an open question whether velocity during VR walking will further slowdown over time or if users of VR will eventually speed-up and adapt their velocity to the VE and move with the same speed as in the real world. In this paper we present a study to compare the effects of cognitive task on velocity during long-distance walking in VR compared to walking in the real world. Therefore, we used an exact virtual replica model of the users’ real surrounding. To reliably evaluate locomotion performance, we analyzed walking velocity during long-distance walking. This was achieved by 60 consecutive cycles using a left/right figure-8 protocol, which avoids the limitations of treadmill and non-consecutive walking protocols (i. e., start-stop). The results show a significant decrease of velocity in the VE compared to the real world even after 60 consecutive cycles with and without the cognitive task.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132654656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differences in Haptic and Visual Perception of Expressive 1DoF Motion","authors":"Elyse D. Z. Chase, Sean Follmer","doi":"10.1145/3343036.3343136","DOIUrl":"https://doi.org/10.1145/3343036.3343136","url":null,"abstract":"Humans can perceive motion through a variety of different modalities. Vision is a well explored modality; however haptics can greatly increase the richness of information provided to the user. The detailed differences in perception of motion between these two modalities are not well studied and can provide an additional avenue for communication between humans and haptic devices or robots. We analyze these differences in the context of users interactions with a non-anthropomorphic haptic device. In this study, participants experienced different levels and combinations of stiffness, jitter, and acceleration curves via a one degree of freedom linear motion display. These conditions were presented with and without the opportunity for users to touch the setup. Participants rated the experiences within the contexts of emotion, anthropomorphism, likeability, and safety using the SAM scale, HRI metrics, as well as with qualitative feedback. A positive correlation between stiffness and dominance, specifically due to the haptic condition, was found; additionally, with the introduction of jitter, decreases in perceived arousal and likeability were recorded. Trends relating acceleration curves to perceived dominance as well as stiffness and jitter to valence, arousal, dominance, likeability, and safety were also found. These results suggest the importance of considering which sensory modalities are more actively engaged during interactions and, concomitantly, which behaviors designers should employ in the creation of non-anthropomorphic interactive haptic devices to achieve a particular interpreted affective state.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134373382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empirical Evaluation of the Interplay of Emotion and Visual Attention in Human-Virtual Human Interaction","authors":"Matias Volonte, Reza Ghaiumy Anaraky, Bart P. Knijnenburg, A. Duchowski, Sabarish V. Babu","doi":"10.1145/3343036.3343118","DOIUrl":"https://doi.org/10.1145/3343036.3343118","url":null,"abstract":"We examined the effect of rendering style and the interplay between attention and emotion in users during interaction with a virtual patient in a medical training simulator. The virtual simulation was rendered representing a sample from the photo-realistic to the non-photorealistic continuum, namely Near-Realistic, Cartoon or Pencil-Shader. In a mixed design study, we collected 45 participants’ emotional responses and gaze behavior using surveys and an eye tracker while interacting with a virtual patient who was medically deteriorating over time. We used a cross-lagged panel analysis of attention and emotion to understand their reciprocal relationship over time. We also performed a mediation analysis to compare the extent to which the virtual agent’s appearance and his affective behavior impacted users’ emotional and attentional responses. Results showed the interplay between participants’ visual attention and emotion over time and also showed that attention was a stronger variable than emotion during the interaction with the virtual human.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116397954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transsaccadic Awareness of Scene Transformations in a 3D Virtual Environment","authors":"Maryam Keyvanara, R. Allison","doi":"10.1145/3343036.3343121","DOIUrl":"https://doi.org/10.1145/3343036.3343121","url":null,"abstract":"In gaze-contingent displays, the viewer’s eye movement data are processed in real-time to adjust the graphical content. To provide a high-quality user experience, these graphical updates must occur with minimum delay. Such updates can be used to introduce imperceptible changes in virtual camera pose in applications such as networked gaming, collaborative virtual reality and redirected walking. For such applications, perceptual saccadic suppression can help to hide the graphical artifacts. We investigated whether the visibility of these updates depends on the type of image transformation. Users viewed 3D scenes in which the displacement of a target object triggered them to generate a vertical or horizontal saccade, during which a translation or rotation was applied to the virtual camera used to render the scene. After each trial, users indicated the direction of the scene change in a forced-choice task. Results show that type and size of the image transformation affected change detectability. During horizontal or vertical saccades, rotations along the roll axis were the most detectable, while horizontal and vertical translations were least noticed. We confirm that large 3D adjustments to the scene viewpoint can be introduced unobtrusively and with low latency during saccades, but the allowable extent of the correction varies with the transformation applied.","PeriodicalId":228010,"journal":{"name":"ACM Symposium on Applied Perception 2019","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124695679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}