{"title":"When Scents Help Me Remember My Password","authors":"Anas Ali Alkasasbeh, F. Spyridonis, G. Ghinea","doi":"10.1145/3469889","DOIUrl":"https://doi.org/10.1145/3469889","url":null,"abstract":"Current authentication processes overwhelmingly rely on audiovisual data, comprising images, text or audio. However, the use of olfactory data (scents) has remained unexploited in the authentication process, notwithstanding their verified potential to act as cues for information recall. Accordingly, in this paper, a new authentication process is proposed in which olfactory media are used as cues in the login phase. To this end, PassSmell , a proof of concept authentication application, is developed in which words and olfactory media act as passwords and olfactory passwords, respectively. In order to evaluate the potential of PassSmell, two different versions were developed, namely one which was olfactory-enhanced and another which did not employ olfactory media. Forty-two participants were invited to take part in the experiment, evenly split into a control and experimental group. For assessment purposes, we recorded the time taken to logon as well as the number of failed/successful login attempts; we also asked users to complete a Quality of Experience (QoE) questionnaire. In terms of time taken, a significant difference was found between the experimental and the control groups, as determined by an independent sample t-test. Similar results were found with respect to average scores and the number of successful attempts. Regarding user QoE, having olfactory media with words influenced the users positively, emphasizing the potential of using this kind of authentication application in the future.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125631446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computational Model for Global Contour Precedence Based on Primary Visual Cortex Mechanisms","authors":"Hui Wei, Jingmeng Li","doi":"10.1145/3459999","DOIUrl":"https://doi.org/10.1145/3459999","url":null,"abstract":"The edges of an image contains rich visual cognitive cues. However, the edge information of a natural scene usually is only a set of disorganized unorganized pixels for a computer. In psychology, the phenomenon of quickly perceiving global information from a complex pattern is called the global precedence effect (GPE). For example, when one observes the edge map of an image, some contours seem to automatically “pop out” from the complex background. This is a manifestation of GPE on edge information and is called global contour precedence (GCP). The primary visual cortex (V1) is closely related to the processing of edges. In this article, a neural computational model to simulate GCP based on the mechanisms of V1 is presented. There are three layers in the proposed model: the representation of line segments, organization of edges, and perception of global contours. In experiments, the ability to group edges is tested on the public dataset BSDS500. The results show that the grouping performance, robustness, and time cost of the proposed model are superior to those of other methods. In addition, the outputs of the proposed model can also be applied to the generation of object proposals, which indicates that the proposed model can contribute significantly to high-level visual tasks.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132976455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification of Words and Phrases Through a Phonemic-Based Haptic Display","authors":"C. Reed, H. Tan, Yang Jiao, Zachary D. Perez, E. Wilson","doi":"10.1145/3458725","DOIUrl":"https://doi.org/10.1145/3458725","url":null,"abstract":"Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments as well as in applications such as human-computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research is concerned with perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tactile phonemic display was conveyed through an array of 24 tactors that stimulated the dorsal and ventral surfaces of the forearm. Experiments examined the recognition of individual words as a function of the inter-phoneme interval (Study 1) and two-word phrases as a function of the inter-word interval (Study 2). Following an average training period of 4.3 hrs on phoneme and word recognition tasks, mean scores for the recognition of individual words in Study 1 ranged from 87.7% correct to 74.3% correct as the inter-phoneme interval decreased from 300 to 0 ms. In Study 2, following an average of 2.5 hours of training on the two-word phrase task, both words in the phrase were identified with an accuracy of 75% correct using an inter-word interval of 1 sec and an inter-phoneme interval of 150 ms. Effective transmission rates achieved on this task were estimated to be on the order of 30 to 35 words/min.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129809841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does What We See Shape History? Examining Workload History as a Function of Performance and Ambient/Focal Visual Attention","authors":"S. Devlin, Jennifer K. Byham, S. L. Riggs","doi":"10.1145/3449066","DOIUrl":"https://doi.org/10.1145/3449066","url":null,"abstract":"Changes in task demands can have delayed adverse impacts on performance. This phenomenon, known as the workload history effect, is especially of concern in dynamic work domains where operators manage fluctuating task demands. The existing workload history literature does not depict a consistent picture regarding how these effects manifest, prompting research to consider measures that are informative on the operator's process. One promising measure is visual attention patterns, due to its informativeness on various cognitive processes. To explore its ability to explain workload history effects, participants completed a task in an unmanned aerial vehicle command and control testbed where workload transitioned gradually and suddenly. The participants’ performance and visual attention patterns were studied over time to identify workload history effects. The eye-tracking analysis consisted of using a recently developed eye-tracking metric called coefficient K, as it indicates whether visual attention is more focal or ambient. The performance results found workload history effects, but it depended on the workload level, time elapsed, and performance measure. The eye-tracking analysis suggested performance suffered when focal attention was deployed during low workload, which was an unexpected finding. When synthesizing these results, they suggest unexpected visual attention patterns can impact performance immediately over time. Further research is needed; however, this work shows the value of including a real-time visual attention measure, such as coefficient K, as a means to understand how the operator manages varying task demands in complex work environments.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133517165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Distances in Action Space in Augmented Reality","authors":"Holly C. Gagnon, Carlos Salas Rosales, Ryan Mileris, Jeanine K. Stefanucci, Sarah H. Creem-Regehr, Bobby Bodenheimer","doi":"10.1145/3449067","DOIUrl":"https://doi.org/10.1145/3449067","url":null,"abstract":"Augmented reality (AR) is important for training complex tasks, such as navigation, assembly, and medical procedures. The effectiveness of such training may depend on accurate spatial localization of AR objects in the environment. This article presents two experiments that test egocentric distance perception in augmented reality within and at the boundaries of action space (up to 35 m) in comparison with distance perception in a matched real-world (RW) environment. Using the Microsoft HoloLens, in Experiment 1, participants in two different RW settings judged egocentric distances (ranging from 10 to 35 m) to an AR avatar or a real person using a visual matching measure. Distances to augmented targets were underestimated compared to real targets in the two indoor, RW contexts. Experiment 2 aimed to generalize the results to an absolute distance measure using verbal reports in one of the indoor environments. Similar to Experiment 1, distances to augmented targets were underestimated compared to real targets. We discuss these findings with respect to the importance of methodologies that directly compare performance in real and mediated environments, as well as the inherent differences present in mediated environments that are “matched” to the real world.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121163693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Tactile Textures on Preference in Visuo-Tactile Exploration","authors":"Wanjoo Park, Muhammad Hassan Jamil, R. Gebremedhin, M. Eid","doi":"10.1145/3449065","DOIUrl":"https://doi.org/10.1145/3449065","url":null,"abstract":"The use of haptic technologies has recently become immensely essential in Human-Computer Interaction to improve user experience and performance. With the introduction of tactile feedback on a touchscreen device, commonly known as surface haptics, several applications and interaction paradigms have become a reality. However, the effects of tactile feedback on the preference of 2D images in visuo-tactile exploration task on touchscreen devices remain largely unknown. In this study, we investigated differences of preference score (the tendency of participants to like/dislike a 2D image based on its visual and tactile properties), reach time, interaction time, and response time under four conditions of feedback: no tactile feedback, high-quality of tactile information (sharp tactile texture), low-quality of tactile information (blurred tactile texture), and incorrect tactile information (mismatch tactile texture). The tactile feedback is rendered in the form of roughness that is simulated by modulating the friction between the finger and the surface and is derived from the 2D image. Thirty-six participants completed visuo-tactile exploration tasks for a total of 36 trials (3 2D images × 4 tactile textures × 3 repetitions). Results showed that the presence of tactile feedback enhanced users’ preference (tactile feedback conditions were rated significantly higher than the no tactile feedback condition for preference regardless of the quality/correctness of tactile feedback). This finding is also supported through results from self-reporting where 88.89% of participants preferred to experience the 2D image with tactile feedback. Additionally, the presence of tactile feedback resulted in significantly larger interaction time and response time compared to the no tactile feedback condition. Furthermore, the quality and correctness of tactile information significantly impacted the preference rating (sharp tactile textures were rated statistically higher than blurred tactile and mismatched tactile textures). All of these findings demonstrate that tactile feedback plays a crucial role in users’ preference and thus motivates further the development of surface haptic technologies.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129680579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Tracking Interaction on Unmodified Mobile VR Headsets Using the Selfie Camera","authors":"P. Drakopoulos, G. Koulieris, K. Mania","doi":"10.1145/3456875","DOIUrl":"https://doi.org/10.1145/3456875","url":null,"abstract":"Input methods for interaction in smartphone-based virtual and mixed reality (VR/MR) are currently based on uncomfortable head tracking controlling a pointer on the screen. User fixations are a fast and natural input method for VR/MR interaction. Previously, eye tracking in mobile VR suffered from low accuracy, long processing time, and the need for hardware add-ons such as anti-reflective lens coating and infrared emitters. We present an innovative mobile VR eye tracking methodology utilizing only the eye images from the front-facing (selfie) camera through the headset’s lens, without any modifications. Our system first enhances the low-contrast, poorly lit eye images by applying a pipeline of customised low-level image enhancements suppressing obtrusive lens reflections. We then propose an iris region-of-interest detection algorithm that is run only once. This increases the iris tracking speed by reducing the iris search space in mobile devices. We iteratively fit a customised geometric model to the iris to refine its coordinates. We display a thin bezel of light at the top edge of the screen for constant illumination. A confidence metric calculates the probability of successful iris detection. Calibration and linear gaze mapping between the estimated iris centroid and physical pixels on the screen results in low latency, real-time iris tracking. A formal study confirmed that our system’s accuracy is similar to eye trackers in commercial VR headsets in the central part of the headset’s field-of-view. In a VR game, gaze-driven user completion time was as fast as with head-tracked interaction, without the need for consecutive head motions. In a VR panorama viewer, users could successfully switch between panoramas using gaze.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133010719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Subsurface Scattering in Glossiness Perception","authors":"Davit Gigilashvili, Weiqi Shi, Zeyu Wang, Marius Pedersen, J. Hardeberg, H. Rushmeier","doi":"10.1145/3458438","DOIUrl":"https://doi.org/10.1145/3458438","url":null,"abstract":"This study investigates the potential impact of subsurface light transport on gloss perception for the purposes of broadening our understanding of visual appearance in computer graphics applications. Gloss is an important attribute for characterizing material appearance. We hypothesize that subsurface scattering of light impacts the glossiness perception. However, gloss has been traditionally studied as a surface-related quality and the findings in the state-of-the-art are usually based on fully opaque materials, although the visual cues of glossiness can be impacted by light transmission as well. To address this gap and to test our hypothesis, we conducted psychophysical experiments and found that subjects are able to tell the difference in terms of gloss between stimuli that differ in subsurface light transport but have identical surface qualities and object shape. This gives us a clear indication that subsurface light transport contributes to a glossy appearance. Furthermore, we conducted additional experiments and found that the contribution of subsurface scattering to gloss varies across different shapes and levels of surface roughness. We argue that future research on gloss should include transparent and translucent media and to extend the perceptual models currently limited to surface scattering to more general ones inclusive of subsurface light transport.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123062425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Field-of-View Restriction to Reduce VR Sickness Does Not Impede Spatial Learning in Women","authors":"I. Adhanom, Majed Al-Zayer, P. MacNeilage, Eelke Folmer","doi":"10.1145/3448304","DOIUrl":"https://doi.org/10.1145/3448304","url":null,"abstract":"Women are more likely to experience virtual reality (VR) sickness than men, which could pose a major challenge to the mass market success of VR. Because VR sickness often results from a visual-vestibular conflict, an effective strategy to mitigate conflict is to restrict the user’s field-of-view (FOV) during locomotion. Sex differences in spatial cognition have been well researched, with several studies reporting that men exhibit better spatial navigation performance in desktop three-dimensional environments than women. However, additional research suggests that this sex difference can be mitigated by providing a larger FOV as this increases the availability of landmarks, which women tend to rely on more than men. Though FOV restriction is already a widely used strategy for VR headsets to minimize VR sickness, it is currently not well understood if it impedes spatial learning in women due to decreased availability of landmarks. Our study (n=28, 14 men and 14 women) found that a dynamic FOV restrictor was equally effective in reducing VR sickness in both sexes, and no sex differences in VR sickness incidence were found. Our study did find a sex difference in spatial learning ability, but an FOV restrictor did not impede spatial learning in either sex.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132171814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spot the Difference","authors":"Kiwon Um, Xiangyu Y. Hu, Bing Wang, N. Thuerey","doi":"10.1145/3449064","DOIUrl":"https://doi.org/10.1145/3449064","url":null,"abstract":"Comparative evaluation lies at the heart of science, and determining the accuracy of a computational method is crucial for evaluating its potential as well as for guiding future efforts. However, metrics that are typically used have inherent shortcomings when faced with the under-resolved solutions of real-world simulation problems. We show how to leverage the human visual system in conjunction with crowd-sourced user studies to address the fundamental problems of widely used classical evaluation metrics. We demonstrate that such user studies driven by visual perception yield a very robust metric and consistent answers for complex phenomena without any requirements for proficiency regarding the physics at hand. This holds even for cases away from convergence where traditional metrics often end up with inconclusive results. More specifically, we evaluate results of different essentially non-oscillatory (ENO) schemes in different fluid flow settings. Our methodology represents a novel and practical approach for scientific evaluations that can give answers for previously unsolved problems.","PeriodicalId":356693,"journal":{"name":"ACM Transactions on Applied Perception (TAP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130011638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}