{"title":"The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality","authors":"Elena Kokkinara, M. Slater, Joan López-Moliner","doi":"10.1145/2818998","DOIUrl":"https://doi.org/10.1145/2818998","url":null,"abstract":"We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g., between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g., we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g., how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, whereas changes in body perception could not directly be assessed. In this study, we investigated the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception, taking advantage of immersive virtual reality. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulation of 2 and 4 times faster can significantly change participants’ proprioceptive judgments of a virtual object’s size without affecting the perceived body ownership, although it did affect the agency of the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived virtual object’s size; however, the mismatched information did not affect either the sense of body ownership or agency. We conclude that adaptation to spatial and spatiotemporal distortion can similarly change our perception of space, although spatiotemporal distortions can more easily be detected.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"35 1","pages":"3:1-3:22"},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81694315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze Transition Entropy","authors":"Krzysztof Krejtz, A. Duchowski, T. Szmidt, I. Krejtz, Fernando González Perilli, A. Pires, A. Vilaró, N. Villalobos","doi":"10.1145/2834121","DOIUrl":"https://doi.org/10.1145/2834121","url":null,"abstract":"This article details a two-step method of quantifying eye movement transitions between areas of interest (AOIs). First, individuals' gaze switching patterns, represented by fixated AOI sequences, are modeled as Markov chains. Second, Shannon's entropy coefficient of the fit Markov model is computed to quantify the complexity of individual switching patterns. To determine the overall distribution of attention over AOIs, the entropy coefficient of individuals' stationary distribution of fixations is calculated. The novelty of the method is that it captures the variability of individual differences in eye movement characteristics, which are then summarized statistically. The method is demonstrated on gaze data collected from two studies, during free viewing of classical art paintings. Normalized Shannon's entropy, derived from individual transition matrices, is related to participants' individual differences as well as to either their aesthetic impression or recognition of artwork. Low transition and high stationary entropies suggest greater curiosity mixed with a higher subjective aesthetic affinity toward artwork, possibly indicative of visual scanning of the artwork in a more deliberate way. Meanwhile, both high transition and stationary entropies may be indicative of recognition of familiar artwork.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"50 1","pages":"4:1-4:20"},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90333965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Strutting Hero, Sneaking Villain: Utilizing Body Motion Cues to Predict the Intentions of Others","authors":"H. Kiiski, Ludovic Hoyet, Andrew T. Woods, C. O'Sullivan, F. Newell","doi":"10.1145/2791293","DOIUrl":"https://doi.org/10.1145/2791293","url":null,"abstract":"A better understanding of how intentions and traits are perceived from body movements is required for the design of more effective virtual characters that behave in a socially realistic manner. For this purpose, realistic body motion, captured from human movements, is being used more frequently for creating characters with natural animations in games and entertainment. However, it is not always clear for programmers and designers which specific motion parameters best convey specific information such as certain emotions, intentions, or traits. We conducted two experiments to investigate whether the perceived traits of actors could be determined from their body motion, and whether these traits were associated with their perceived intentions. We first recorded body motions from 26 professional actors, who were instructed to move in a “hero”-like or a “villain”-like manner. In the first experiment, 190 participants viewed individual video recordings of these actors and were required to provide ratings to the body motion stimuli along a series of different cognitive dimensions (intentions, attractiveness, dominance, trustworthiness, and distinctiveness). The intersubject ratings across observers were highly consistent, suggesting that social traits are readily determined from body motion. Moreover, correlational analyses between these ratings revealed consistent associations across traits, for example, that perceived “good” intentions were associated with higher ratings of attractiveness and dominance. Experiment 2 was designed to elucidate the qualitative body motion cues that were critical for determining specific intentions and traits from the hero- and villain-like body movements. The results revealed distinct body motions that were readily associated with the perception of either “good” or “bad” intentions. Moreover, regression analyses revealed that these ratings accurately predicted the perception of the portrayed character type. These findings indicate that intentions and social traits are communicated effectively via specific sets of body motion features. Furthermore, these results have important implications for the design of the motion of virtual characters to convey desired social information.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"13 1","pages":"1:1-1:21"},"PeriodicalIF":1.6,"publicationDate":"2015-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75452961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deformation Lamps: A Projection Technique to Make Static Objects Perceptually Dynamic","authors":"Takahiro Kawabe, Taiki Fukiage, Masataka Sawayama, S. Nishida","doi":"10.1145/2874358","DOIUrl":"https://doi.org/10.1145/2874358","url":null,"abstract":"Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as (Deformation Lamps), is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our techniques was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance by light projection. Finally, we discuss how our techniques can make the observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"43 1","pages":"10:1-10:17"},"PeriodicalIF":1.6,"publicationDate":"2015-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84764483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Influence of the Stereo Base on Blind and Sighted Reaches in a Virtual Environment","authors":"Rebekka S. Renner, Erik Steindecker, Mathias Müller, B. Velichkovsky, R. Stelzer, S. Pannasch, J. Helmert","doi":"10.1145/2724716","DOIUrl":"https://doi.org/10.1145/2724716","url":null,"abstract":"In virtual environments, perceived distances are frequently reported to be shorter than intended. One important parameter for spatial perception in a stereoscopic virtual environment is the stereo base—that is, the distance between the two viewing cameras. We systematically varied the stereo base relative to the interpupillary distance (IPD) and examined influences on distance and size perception. Furthermore, we tested whether an individual adjustment of the stereo base through an alignment task would reduce the errors in distance estimation. Participants performed reaching movements toward a virtual tennis ball either with closed eyes (blind reaches) or open eyes (sighted reaches). Using the participants' individual IPD, the stereo base was set to (a) the IPD, (b) proportionally smaller, (c) proportionally larger, or (d) adjusted according to the individual performance in an alignment task that was conducted beforehand. Overall, consistent with previous research, distances were underestimated. As expected, with a smaller stereo base, the virtual object was perceived as being farther away and bigger, in contrast to a larger stereo base, where the virtual object was perceived to be nearer and smaller. However, the manipulation of the stereo base influenced blind reaching estimates to a smaller extent than expected, which might be due to a combination of binocular disparity and pictorial depth cues. In sighted reaching, when visual feedback was available, presumably the use of disparity matching led to a larger effect of the stereo base. The use of an individually adjusted stereo base diminished the average underestimation but did not reduce interindividual variance. Interindividual differences were task specific and could not be explained through differences in stereo acuity or fixation disparity.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"9 1","pages":"7:1-7:18"},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78745206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affordance Judgments in HMD-Based Virtual Environments: Stepping over a Pole and Stepping off a Ledge","authors":"Qiufeng Lin, J. Rieser, Bobby Bodenheimer","doi":"10.1145/2720020","DOIUrl":"https://doi.org/10.1145/2720020","url":null,"abstract":"People judge what they can and cannot do all the time when acting in the physical world. Can I step over that fence or do I need to duck under it? Can I step off of that ledge or do I need to climb off of it? These qualities of the environment that people perceive that allow them to act are called affordances. This article compares people’s judgments of affordances on two tasks in both the real world and in virtual environments presented with head-mounted displays. The two tasks were stepping over or ducking under a pole, and stepping straight off of a ledge. Comparisons between the real world and virtual environments are important because they allow us to evaluate the fidelity of virtual environments. Another reason is that virtual environment technologies enable precise control of the myriad perceptual cues at work in the physical world and deepen our understanding of how people use vision to decide how to act. In the experiments presented here, the presence or absence of a self-avatar—an animated graphical representation of a person embedded in the virtual environment—was a central factor. Another important factor was the presence or absence of action, that is, whether people performed the task or reported that they could or could not perform the task. The results show that animated self-avatars provide critical information for people deciding what they can and cannot do in virtual environments, and that action is significant in people’s affordance judgments.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"39 1","pages":"6:1-6:21"},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84988637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Auditory Distance Presentation in an Urban Augmented Reality Environment","authors":"R. Albrecht, T. Lokki","doi":"10.1145/2723568","DOIUrl":"https://doi.org/10.1145/2723568","url":null,"abstract":"Presenting points of interest in the environment by means of audio augmented reality offers benefits compared with traditional visual augmented reality and map-based approaches. However, presentation of distant virtual sound sources is problematic. This study looks at combining well-known auditory distance cues to convey the distance of points of interest. The results indicate that although the provided cues are intuitively mapped to relatively short distances, users can with only little training learn to map these cues to larger distances.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"4 1","pages":"5:1-5:19"},"PeriodicalIF":1.6,"publicationDate":"2015-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74617743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Enhancement of MR Angiography Images to Facilitate Planning of Arteriovenous Malformation Interventions","authors":"Kamyar Abhari, John Stuart Haberl Baxter, Ali R. Khan, T. Peters, S. Ribaupierre, R. Eagleson","doi":"10.1145/2701425","DOIUrl":"https://doi.org/10.1145/2701425","url":null,"abstract":"The primary purpose of medical image visualization is to improve patient outcomes by facilitating the inspection, analysis, and interpretation of patient data. This is only possible if the users’ perceptual and cognitive limitations are taken into account during every step of design, implementation, and evaluation of interactive displays. Visualization of medical images, if executed effectively and efficiently, can empower physicians to explore patient data rapidly and accurately with minimal cognitive effort. This article describes a specific case study in biomedical visualization system design and evaluation, which is the visualization of MR angiography images for planning arteriovenous malformation (AVM) interventions. The success of an AVM intervention greatly depends on the surgeon gaining a full understanding of the anatomy of the malformation and its surrounding structures. Accordingly, the purpose of this study was to investigate the usability of visualization modalities involving contour enhancement and stereopsis in the identification and localization of vascular structures using objective user studies. Our preliminary results indicate that contour enhancement, particularly when combined with stereopsis, results in improved performance enhancement of the perception of connectivity and relative depth between different structures.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"54 1","pages":"4:1-4:15"},"PeriodicalIF":1.6,"publicationDate":"2015-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76565267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Wrinkles, Presentation Mode, and Intensity on the Perception of Facial Actions and Full-Face Expressions of Laughter","authors":"Radoslaw Niewiadomski, C. Pelachaud","doi":"10.1145/2699255","DOIUrl":"https://doi.org/10.1145/2699255","url":null,"abstract":"This article focuses on the identification and perception of facial action units displayed alone as well as the meaning decoding and perception of full-face synthesized expressions of laughter. We argue that the adequate representation of single action units is important in the decoding and perception of full-face expressions. In particular, we focus on three factors that may influence the identification and perception of single actions and full-face expressions: their presentation mode (static vs. dynamic), their intensity, and the presence of wrinkles.\u0000 For the purpose of this study, we used a hybrid approach for animation synthesis that combines data-driven and procedural animations with synthesized wrinkles generated using a bump mapping method. Using such animation technique, we created animations of single action units and full-face movements of two virtual characters. Next, we conducted two studies to evaluate the role of presentation mode, intensity, and wrinkles in single actions and full-face context-free expressions. Our evaluation results show that intensity and presentation mode influence (1) the identification of single action units and (2) the perceived quality of the animation. At the same time, wrinkles (3) are useful in the identification of a single action unit and (4) influence the perceived meaning attached to the animation of full-face expressions. Thus, all factors are important for successful communication of expressions displayed by virtual characters.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"6 1","pages":"2:1-2:21"},"PeriodicalIF":1.6,"publicationDate":"2015-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86677591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Height Manipulations: A Possible Solution to Reduce Underestimation of Egocentric Distances in Head-Mounted Displays","authors":"Markus Leyrer, Sally A. Linkenauger, H. Bülthoff, B. Mohler","doi":"10.1145/2699254","DOIUrl":"https://doi.org/10.1145/2699254","url":null,"abstract":"Virtual reality technology can be considered a multipurpose tool for diverse applications in various domains, for example, training, prototyping, design, entertainment, and research investigating human perception. However, for many of these applications, it is necessary that the designed and computer-generated virtual environments are perceived as a replica of the real world. Many research studies have shown that this is not necessarily the case. Specifically, egocentric distances are underestimated compared to real-world estimates regardless of whether the virtual environment is displayed in a head-mounted display or on an immersive large-screen display. While the main reason for this observed distance underestimation is still unknown, we investigate a potential approach to reduce or even eliminate this distance underestimation. Building up on the angle of declination below the horizon relationship for perceiving egocentric distances, we describe how eye height manipulations in virtual reality should affect perceived distances. In addition, we describe how this relationship could be exploited to reduce distance underestimation for individual users. In a first experiment, we investigate the influence of a manipulated eye height on an action-based measure of egocentric distance perception. We found that eye height manipulations have similar predictable effects on an action-based measure of egocentric distance as we previously observed for a cognitive measure. This might make this approach more useful than other proposed solutions across different scenarios in various domains, for example, for collaborative tasks. In three additional experiments, we investigate the influence of an individualized manipulation of eye height to reduce distance underestimation in a sparse-cue and a rich-cue environment. In these experiments, we demonstrate that a simple eye height manipulation can be used to selectively alter perceived distances on an individual basis, which could be helpful to enable every user to have an experience close to what was intended by the content designer.","PeriodicalId":50921,"journal":{"name":"ACM Transactions on Applied Perception","volume":"43 1","pages":"1:1-1:23"},"PeriodicalIF":1.6,"publicationDate":"2015-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85345364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}