{"title":"Proceedings of the 15th ACM Symposium on Applied Perception","authors":"","doi":"10.1145/3225153","DOIUrl":"https://doi.org/10.1145/3225153","url":null,"abstract":"","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130209195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating perception time in the far peripheral vision for virtual and augmented reality","authors":"Xuetong Sun, A. Varshney","doi":"10.1145/3225153.3225160","DOIUrl":"https://doi.org/10.1145/3225153.3225160","url":null,"abstract":"Far peripheral vision (beyond 60° eccentricity) is beginning to be supported in the latest virtual and augmented reality (VR and AR) headsets. This benefits VR and AR experiences by allowing a greater amount of information to be conveyed, reducing visual clutter, and enabling subtle visual attention management. However, the visual properties of the far periphery differ from those of central vision because of physiological differences between the areas of the visual cortex responsible for the respective vision types. In this paper, we investigate perception time in the far peripheral vision, specifically the time it takes for a user to perceive a pattern at high eccentricity. We characterized perception time in the far peripheral vision by conducting a user study with 40 participants, in which participants distinguished between two types of patterns displayed at several sizes and at various eccentricities in their field of view. Our results show that at higher eccentricities, participants take longer to perceive a pattern. Based on the user study data, we characterize the desired scaling of patterns at higher eccentricities, so that they can be perceived within a similar amount of time as in central vision.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113949485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual adjustment of eyeball and pupil diameter jitter amplitudes for virtual characters","authors":"S. Jörg, A. Duchowski, Krzysztof Krejtz, Anna Niedzielska","doi":"10.1145/3225153.3243895","DOIUrl":"https://doi.org/10.1145/3225153.3243895","url":null,"abstract":"","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130958912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of eye-head coordination between virtual and physical realities","authors":"Kevin P. Pfeil, E. Taranta, Arun K. Kulshreshth, P. Wisniewski, J. Laviola","doi":"10.1145/3225153.3225157","DOIUrl":"https://doi.org/10.1145/3225153.3225157","url":null,"abstract":"Past research has shown that humans exhibit certain eye-head responses to the appearance of visual stimuli, and that these natural reactions change during different activities. Our work builds upon these past observations by offering new insight into how humans behave in Virtual Reality (VR) compared to Physical Reality (PR). Using eye- and head-tracking technology, and by conducting a study on two groups of users (participants in VR or PR), we identify how often these natural responses are observed in each environment. We find that users move their heads significantly more often when viewing stimuli in VR than in PR, and that VR users also move their heads more in the presence of text. We open a discussion for identifying the HWD factors that cause this difference, as it may affect not only predictive models using eye movements as features, but also VR user experience overall.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114910630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touch with foreign hands: the effect of virtual hand appearance on visual-haptic integration","authors":"V. Schwind, Lorraine Lin, Massimiliano Di Luca, S. Jörg, James M. Hillis","doi":"10.1145/3225153.3225158","DOIUrl":"https://doi.org/10.1145/3225153.3225158","url":null,"abstract":"Hand tracking and haptics are gaining more importance as key technologies of virtual reality (VR) systems. For designing such systems, it is fundamental to understand how the appearance of the virtual hands influences user experience and how the human brain integrates vision and haptics. However, it is currently unknown whether multi-sensory integration of visual and haptic feedback can be influenced by the appearance of virtual hands in VR. We performed a user study in VR to gain insight into the effect of hand appearance on how the brain combines visual and haptic signals using a cue-conflict paradigm. In this paper, we show that the detection of surface irregularities (bumps and holes) sensed by eyes and hands is affected by the rendering of avatar hands. However, sensitivity changes do not correlate with the degree of perceived limb ownership. Qualitative feedback provides insights into potentially distracting cues in visual-haptic integration.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123844810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of height in virtual reality: a study of climbing stairs","authors":"Noorin Suhaila Asjad, Haley Adams, Richard A. Paris, Bobby Bodenheimer","doi":"10.1145/3225153.3225171","DOIUrl":"https://doi.org/10.1145/3225153.3225171","url":null,"abstract":"Most virtual environments that people locomote through with head-mounted displays are flat to match the physical environment that people are actively walking on. In this paper we simulated stair climbing, and evaluated how well people could assess the distance they had climbed after several minutes of the activity under various conditions. We varied factors such as the presence of virtual feet (shoes), whether the stairwell was open or enclosed, the presence or absence of passive haptic markers, and whether a subject was ascending or descending. In general, the distance climbed or descended was overestimated, consistent with prior work on the perception of height. We find that subjects have significantly better ability to estimate their error with the presence of virtual shoes than without, and when the environment was open. Having shoes also resulted in significantly higher ratings of presence. We also find a significant tendency for females to show higher ratings of simulator sickness.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123487431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of avatar fidelity and sex on self-motion recognition","authors":"A. Thaler, Anna C. Wellerdiek, Markus Leyrer, E. Volkova-Volkmar, N. Troje, B. Mohler","doi":"10.1145/3225153.3225176","DOIUrl":"https://doi.org/10.1145/3225153.3225176","url":null,"abstract":"Avatars are important for games and immersive social media applications. Although avatars are still not complete digital copies of the user, they often aim to represent a user in terms of appearance (color and shape) and motion. Previous studies have shown that humans can recognize their own motions in point-light displays. Here, we investigated whether recognition of self-motion depends on the avatar's fidelity and on the congruency of the avatar's sex with that of the participant. Participants performed different actions that were captured and subsequently remapped onto three different body representations: a point-light figure, a male virtual avatar, and a female virtual avatar. In the experiment, participants viewed the motions displayed on the three body representations and responded to whether the motion was their own. Our results show that body representation had no influence on self-motion recognition performance; participants were equally sensitive in recognizing their own motion on the point-light figure and on the virtual characters. In line with previous research, recognition performance depended on the action. Sensitivity was highest for uncommon actions, such as dancing and playing ping-pong, and was around chance level for running, suggesting that the degree of individuality in performing certain actions affects self-motion recognition performance. Our results show that people were able to recognize their own motions even when individual body shape cues were completely eliminated and when the avatar's sex differed from their own. This suggests that people might rely more on kinematic information than on shape and sex cues for recognizing their own motion. This finding has important implications for avatar design in games and immersive social media applications.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116253659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An appearance uniformity metric for 3D printing","authors":"Michael Ludwig, G. Meyer, I. Tastl, N. Moroney, Melanie Gottwals","doi":"10.1145/3225153.3225169","DOIUrl":"https://doi.org/10.1145/3225153.3225169","url":null,"abstract":"A method is presented for perceptually characterizing appearance non-uniformities that result from 3D printing. In contrast to physical measurements, the model is designed to take into account the human visual system and variations in observer conditions such as lighting, point of view, and shape. Additionally, it is capable of handling spatial reflectance variations over a material's surface. Motivated by Schrödinger's line element approach to studying color differences, an image-based psychophysical experiment that explores paths between materials in appearance space is conducted. The line element concept is extended from color to spatially-varying appearances, including color, roughness, and gloss, which enables the measurement of fine differences between appearances along a path. We define two path functions, one interpolating reflectance parameters and the other interpolating the final imagery. An image-based uniformity model is developed, applying a trained neural network to color differences calculated from rendered images of the printed non-uniformities. The final model is shown to perform better than commonly used image comparison algorithms, including on spatial pattern classes that were not used in training.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"311 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117070530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individual differences and impact of gender on curvature redirection thresholds","authors":"Anh Nguyen, Yannick Rothacher, B. Lenggenhager, P. Brugger, A. Kunz","doi":"10.1145/3225153.3225155","DOIUrl":"https://doi.org/10.1145/3225153.3225155","url":null,"abstract":"To enable real walking in a virtual environment (VE) that is larger than the available physical space, redirection techniques can be applied that introduce multisensory conflicts between visual and nonvisual cues to manipulate different aspects of a user's trajectory. When applied within certain thresholds, these manipulations go unnoticed and immersion remains intact. Considerable research effort has been spent on identifying these thresholds, and a wide range of values has been reported across studies. These differences could be explained by many factors, such as individual differences, walking speed, or context settings such as environment design, cognitive load, and distractors. In this paper, we present a study investigating the role of gender on curvature redirection thresholds (RDTs), using the maximum likelihood procedure with the classical two-alternative forced-choice task. Results show high variability in individuals' RDTs, and that on average women have higher curvature RDTs than men. Furthermore, the results also confirm existing findings on the negative correlation between walking speed and curvature RDTs.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114279433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foveated depth-of-field filtering in head-mounted displays","authors":"Martin Weier, T. Roth, André Hinkenjann, P. Slusallek","doi":"10.1145/3225153.3243894","DOIUrl":"https://doi.org/10.1145/3225153.3243894","url":null,"abstract":"In recent years, a variety of methods have been introduced to exploit the decrease in visual acuity of peripheral vision, known as foveated rendering. As more and more computationally involved shading is requested and display resolutions increase, maintaining low latencies is challenging when rendering in a virtual reality context. Here, foveated rendering is a promising approach for reducing the number of shaded samples. However, besides the reduction of visual acuity, the eye is an optical system, filtering radiance through lenses. The lenses create depth-of-field (DoF) effects when accommodated to objects at varying distances. The central idea of this article is to exploit these effects as a filtering method to conceal rendering artifacts. To showcase the potential of such filters, we present a foveated rendering system, tightly integrated with a gaze-contingent DoF filter. Besides presenting benchmarks of the DoF and rendering pipeline, we carried out a perceptual study, showing that rendering quality is rated almost on par with full rendering when using DoF in our foveated mode, while shaded samples are reduced by more than 69%.","PeriodicalId":185507,"journal":{"name":"Proceedings of the 15th ACM Symposium on Applied Perception","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123672713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}