Are They Actually Looking? Identifying Smartphones Shoulder Surfing Through Gaze Estimation
Alia Saad, Dina Hisham Elkafrawy, Slim Abdennadher, Stefan Schneegass
ACM Symposium on Eye Tracking Research and Applications, June 2020. DOI: https://doi.org/10.1145/3379157.3391422
Abstract: Mobile devices have become a crucial part of our everyday lives, but they are subject to user-centered attacks such as shoulder surfing. Previous work focused on notifying the user of a potential shoulder surfer whenever an extra face is detected. Although successful, this approach ignores the possibility that the alleged attacker is merely standing nearby and not looking at the user's device. In this work, we investigate estimating the gaze of potential attackers to verify whether they are indeed looking at the user's phone.
{"title":"Cognitive Load during Eye-typing","authors":"Tanya Bafna, J. P. Hansen, Per Baekgaard","doi":"10.1145/3379155.3391333","DOIUrl":"https://doi.org/10.1145/3379155.3391333","url":null,"abstract":"In this paper, we have measured cognitive load during an interactive eye-tracking task. Eye-typing was chosen as the task, because of its familiarity, ubiquitousness and ease. Experiments with 18 participants, where they memorized and eye-typed easy and difficult sentences over four days, were used to compare the difficulty levels of the tasks using subjective scores and eye-metrics like blink duration, frequency and interval and pupil dilation were explored, in addition to performance measures like typing speed, error rate and attended but not selected rate. Typing performance lowered with increased task difficulty, while blink frequency, duration and interval were higher for the difficult tasks. Pupil dilation indicated the memorization process, but did not demonstrate a difference between easy and difficult tasks.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134486171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequence Models in Eye Tracking: Predicting Pupil Diameter During Learning
Sharath C. Koorathota, Kaveri A. Thakoor, Patrick Adelman, Yaoli Mao, Xueqing Liu, P. Sajda
ACM Symposium on Eye Tracking Research and Applications, June 2020. DOI: https://doi.org/10.1145/3379157.3391653
Abstract: A deep learning framework for predicting pupil diameter from eye tracking data is described. Using a variety of inputs, such as fixation positions, fixation durations, saccades, and blink-related information, we assessed the performance of a sequence model in predicting future pupil diameter in a student population as they watched educational videos in a controlled setting. By assessing student performance on a post-viewing test, we report that deep learning sequence models may be useful for separating components of pupil responses linked to luminance and accommodation from those linked to cognition and arousal.
{"title":"Neural networks for optical vector and eye ball parameter estimation","authors":"Wolfgang Fuhl, Hong Gao, Enkelejda Kasneci","doi":"10.1145/3379156.3391346","DOIUrl":"https://doi.org/10.1145/3379156.3391346","url":null,"abstract":"In this work we evaluate neural networks, support vector machines and decision trees for the regression of the center of the eyeball and the optical vector based on the pupil ellipse. In the evaluation we analyze single ellipses as well as window-based approaches as input. Comparisons are made regarding accuracy and runtime. The evaluation gives an overview of the general expected accuracy with different models and amounts of input ellipses. A simulator was implemented for the generation of the training and evaluation data. For a visual evaluation and to push the state of the art in optical vector estimation, the best model was applied to real data. This real data came from public data sets in which the ellipse is already annotated by an algorithm. The optical vectors on real data and the generator are made publicly available. Link to the generator and models.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114904722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synopticon: Sensor Fusion for Automated Gaze Analysis
Michael Hildebrandt, Jens-Patrick Langstrand, H. Nguyen
ACM Symposium on Eye Tracking Research and Applications, June 2020. DOI: https://doi.org/10.1145/3379157.3391986
Abstract: This demonstration presents Synopticon, an open-source software system for automatic, real-time gaze object detection for mobile eye tracking. The system merges gaze data from eye tracking glasses with position data from a motion capture system and projects the resulting gaze vector onto a 3D model of the environment.
Demo of a Visual Gaze Analysis System for Virtual Board Games
T. Munz, Noel Schaefer, Tanja Blascheck, K. Kurzhals, E. Zhang, D. Weiskopf
ACM Symposium on Eye Tracking Research and Applications, June 2020. DOI: https://doi.org/10.1145/3379157.3391985
Abstract: We demonstrate a system for the visual analysis of eye movement data from competitive and collaborative virtual board games played by two persons. Our approach uses methods to temporally synchronize and spatially register gaze and mouse recordings from two possibly different eye tracking devices. Analysts can then examine the fused data with a combination of visualizations. We demonstrate our methods on the competitive game Go, which is especially challenging for analyzing the strategies of individual players.
{"title":"Gaze - grabber distance in expert and novice forest machine operators: the effects of automatic boom control","authors":"Jani Koskinen, R. Bednarik","doi":"10.1145/3379157.3391414","DOIUrl":"https://doi.org/10.1145/3379157.3391414","url":null,"abstract":"Eye-hand coordination is a central skill in both everyday and expert visuo-motor tasks. In forest machine cockpits during harvest, the operators need to perform the eye-hand coordination and spatial navigation effectively to control the boom of the forwarder smoothly and quickly to achieve high performance. Because it is largely unknown how this skill is acquired, we conducted a first eye-tracking study of its kind to uncover the strategies expert and novice operators use. In an authentic training situation, both groups used an industry standard machine with- and without intelligent boom control support, and we measured their gaze and boom control strategies.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124766477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Transferability of Happiness Detection via Gaze Tracking in Multimedia Applications","authors":"David Bethge, L. Chuang, T. Große-Puppendahl","doi":"10.1145/3379157.3391655","DOIUrl":"https://doi.org/10.1145/3379157.3391655","url":null,"abstract":"How are strong positive affective states related to eye-tracking features and how can they be used to appropriately enhance well-being in multimedia consumption? In this paper, we propose a robust classification algorithm for predicting strong happy emotions from a large set of features acquired from wearable eye-tracking glasses. We evaluate the potential transferability across subjects and provide a model-agnostic interpretable feature importance metric. Our proposed algorithm achieves a true-positive-rate of 70% while keeping a low false-positive-rate of 10% with extracted features of the pupil diameter as most important features.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123471746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implications of Eye Tracking Research to Cinematic Virtual Reality","authors":"Sylvia Rothe, L. Chuang","doi":"10.1145/3379157.3391658","DOIUrl":"https://doi.org/10.1145/3379157.3391658","url":null,"abstract":"While watching omnidirectional movies via head-mounted displays the viewer has an immersive viewing experience. Turning the head and looking around is a natural input technique to choose the visible part of the movie. For realizing scene changes depending on the viewing direction and for implementing non-linear story structures in cinematic virtual reality (CVR), selection methods are required to select the story branches. The input device should not disturb the viewing experience and the viewer should not be primarily aware of it. Eye- and head-based methods do not need additional devices and seem to be especially suitable. We investigate several techniques by using an own tool for analysing head and eye tracking data in CVR.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131558000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spontaneous Gaze Gesture Interaction in the Presence of Noises and Various Types of Eye Movements","authors":"S. Wibirama, Suatmi Murnani, N. A. Setiawan","doi":"10.1145/3379156.3391363","DOIUrl":"https://doi.org/10.1145/3379156.3391363","url":null,"abstract":"Gaze gesture is a desirable technique for a spontaneous and pervasive gaze interaction due to its insensitivity to spatial accuracy. Unfortunately, gaze gesture-based object selection utilizing correlation coefficient is prone to a low object selection accuracy due to presence of noises. In addition, effect of various types of eye movements that present in gaze gesture-based object selection has not been tackled properly. To overcome these problems, we propose a denoising method for gaze gesture-based object selection using First Order IIR Filter and an event detection method based on the Hidden Markov Model. The experimental results show that the proposed method yielded the best object selection accuracy of . The result suggests that a spontaneous gaze gesture-based object selection is feasible to be developed in the presence of noises and various types of eye movements.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126265535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}