{"title":"Characterizing the Performance of Deep Neural Networks for Eye-Tracking","authors":"A. Biswas, Kamran Binaee, Kaylie Jacleen Capurro, M. Lescroart","doi":"10.1145/3450341.3458491","DOIUrl":"https://doi.org/10.1145/3450341.3458491","url":null,"abstract":"Deep neural networks (DNNs) provide powerful tools to identify and track features of interest, and have recently come into use for eye-tracking. Here, we test the ability of a DNN to predict keypoints localizing the eyelid and pupil under the types of challenging image variability that occur in mobile eye-tracking. We simulate varying degrees of perturbation for five common sources of image variation in mobile eye-tracking: rotations, blur, exposure, reflection, and compression artifacts. To compare the relative performance decrease across domains in a common space of image variation, we used features derived from a DNN (ResNet50) to compute the distance of each perturbed video from the videos used to train our DNN. We found that increasing cosine distance from the training distribution was associated with monotonic decreases in model performance in all domains. These results suggest ways to optimize the selection of diverse images for model training.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1614 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134288314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GazeMeter: Exploring the Usage of Gaze Behaviour to Enhance Password Assessments","authors":"Yasmeen Abdrabou, A. Shams, Mohamed Mantawey, A. A. Khan, M. Khamis, Florian Alt, Yomna Abdelrahman, Anam Ahmad","doi":"10.1145/3448017.3457384","DOIUrl":"https://doi.org/10.1145/3448017.3457384","url":null,"abstract":"We investigate the use of gaze behaviour as a means to assess password strength as perceived by users. We contribute to the effort of making users choose passwords that are robust against guessing-attacks. Our particular idea is to consider also the users’ understanding of password strength in security mechanisms. We demonstrate how eye tracking can enable this: by analysing people’s gaze behaviour during password creation, its strength can be determined. To demonstrate the feasibility of this approach, we present a proof of concept study (N = 15) in which we asked participants to create weak and strong passwords. Our findings reveal that it is possible to estimate password strength from gaze behaviour with an accuracy of 86% using Machine Learning. Thus, we enable research on novel interfaces that consider users’ understanding with the ultimate goal of making users choose stronger passwords.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121097051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Deeper Analysis of AOI Coverage in Code Reading","authors":"T. Busjahn, S. Tamm","doi":"10.1145/3448018.3457422","DOIUrl":"https://doi.org/10.1145/3448018.3457422","url":null,"abstract":"The proportion of areas of interest that are covered with gaze is employed as metric to compare natural-language text and source code reading, as well as novice and expert programmers’ code reading behavior. Two levels of abstraction are considered for AOIs: lines and elements. AOI coverage is significantly higher on natural-language text than on code, so a detailed account is provided on the areas that are skipped. Between novice and expert programmers, the overall AOI coverage is comparable. However, segmenting the stimuli into meaningful components revealed that they distribute their gaze differently and partly look at different AOIs. Thus, while programming expertise does not strongly influence AOI coverage quantitatively, it does so qualitatively.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124038037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementing Eye-Tracking for Persona Analytics","authors":"Soon-Gyo Jung, Joni O. Salminen, B. Jansen","doi":"10.1145/3450341.3458765","DOIUrl":"https://doi.org/10.1145/3450341.3458765","url":null,"abstract":"Investigating users’ engagement with interactive persona systems can yield crucial insights for the design of such systems. Using eye-tracking, researchers can address the scarcity of behavioral user studies, even during times when physical user studies are difficult or impossible to carry out. In this research, we implement a webcam-based eye-tracking module into an interactive persona system, facilitating remote user studies. Findings from the implementation can show what information users pay attention to in persona profiles.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121408816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Noise in the Machine: Sources of Physical and Computation Error in Eye Tracking with Pupil Core Wearable Eye Tracker: Wearable Eye Tracker Noise in Natural Motion Experiments","authors":"A. Velisar, N. Shanidze","doi":"10.1145/3450341.3458495","DOIUrl":"https://doi.org/10.1145/3450341.3458495","url":null,"abstract":"Developments in wearable eye tracking devices make them an attractive solution for studies of eye movements during naturalistic head/body motion. However, before these systems’ potential can be fully realized, a thorough assessment of potential sources of error is needed. In this study, we examine three possible sources for the Pupil Core eye tracking goggles: camera motion during head/body motion, choice of calibration marker configuration, and eye movement estimation. In our data, we find that up to 36% of reported eye motion may be attributable to camera movement; choice of appropriate calibration routine is essential for minimizing error; and the use of a secondary calibration for eye position remapping can improve eye position errors estimated from the eye tracker.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126508696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EMIP Toolkit: A Python Library for Customized Post-processing of the Eye Movements in Programming Dataset","authors":"N. A. Madi, Drew T. Guarnera, Bonita Sharif, Jonathan I. Maletic","doi":"10.1145/3448018.3457425","DOIUrl":"https://doi.org/10.1145/3448018.3457425","url":null,"abstract":"The use of eye tracking in the study of program comprehension in software engineering allows researchers to gain a better understanding of the strategies and processes applied by programmers. Despite the large number of eye tracking studies in software engineering, very few datasets are publicly available. The existence of the large Eye Movements in Programming Dataset (EMIP) opens the door for new studies and makes reproducibility of existing research easier. In this paper, a Python library (the EMIP Toolkit) for customized post-processing of the EMIP dataset is presented. The toolkit is specifically designed to make using the EMIP dataset easier and more accessible. It implements features for fixation detection and correction, trial visualization, source code lexical data enrichment, and mapping fixation data over areas of interest. In addition to the toolkit, a filtered token-level dataset with scored recording quality is presented for all Java trials (accounting for 95.8% of the data) in the EMIP dataset.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115555235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze and Heart Rate Synchronization in Computer-Mediated Collaboration","authors":"K. Wisiecka","doi":"10.1145/3450341.3457992","DOIUrl":"https://doi.org/10.1145/3450341.3457992","url":null,"abstract":"Computer-mediated collaboration has become an integral part of our every day functioning. Despite decreased non-verbal communication and face-to-face contact with partners of collaboration, people learned how to remotely work together. The consequences of decreased non-verbal signals such as gaze communication in remote collaboration are however not fully investigated. In a series of three experiments, we propose solutions to enhance quality of remote collaboration. The present paper is focused on examining the relation between gaze and heart reaction during face-to-face and remote collaboration.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114089783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-Based Projection Labeling for Mobile Eye Tracking","authors":"K. Kurzhals","doi":"10.1145/3448017.3457382","DOIUrl":"https://doi.org/10.1145/3448017.3457382","url":null,"abstract":"The annotation of gaze data concerning investigated areas of interest (AOIs) poses a time-consuming step in the analysis procedure of eye tracking experiments. For data from mobile eye tracking glasses, the annotation effort is further increased because each recording has to be investigated individually. Automated approaches based on supervised machine learning require pre-trained categories which are hard to obtain without human interpretation, i.e., labeling ground truth data. We present an interactive visualization approach that supports efficient annotation of gaze data based on image content participants with eye tracking glasses focused on. Recordings can be segmented individually to reduce the annotation effort. Thumbnails represent segments visually and are projected on a 2D plane for a fast comparison of AOIs. Annotated scanpaths can then be interpreted directly with the timeline visualization. We showcase our approach with three different scenarios.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114387985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pinch, Click, or Dwell: Comparing Different Selection Techniques for Eye-Gaze-Based Pointing in Virtual Reality","authors":"Aunnoy K. Mutasim, Anil Ufuk Batmaz, W. Stuerzlinger","doi":"10.1145/3448018.3457998","DOIUrl":"https://doi.org/10.1145/3448018.3457998","url":null,"abstract":"While a pinch action is gaining popularity for selection of virtual objects in eye-gaze-based systems, it is still unknown how well this method performs compared to other popular alternatives, e.g., a button click or a dwell action. To determine pinch’s performance in terms of execution time, error rate, and throughput, we implemented a Fitts’ law task in Virtual Reality (VR) where the subjects pointed with their (eye-)gaze and selected / activated the targets by pinch, clicking a button, or dwell. Results revealed that although pinch was slower, made more errors, and had less throughput compared to button clicks, none of these differences were significant. Dwell exhibited the least errors but was significantly slower and achieved less throughput compared to the other conditions. Based on these findings, we conclude that the pinch gesture is a reasonable alternative to button clicks for eye-gaze-based VR systems.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130125507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithmic gaze classification for mobile eye-tracking","authors":"Daniel Müller, David Mann","doi":"10.1145/3450341.3458886","DOIUrl":"https://doi.org/10.1145/3450341.3458886","url":null,"abstract":"Mobile eye tracking traditionally requires gaze to be coded manually. We introduce an open-source Python package (GazeClassify) that algorithmically annotates mobile eye tracking data for the study of human interactions. Instead of manually identifying objects and identifying if gaze is directed towards an area of interest, computer vision algorithms are used for the identification and segmentation of human bodies. To validate the algorithm, mobile eye tracking data from short combat sport sequences were analyzed. The performance of the algorithm was compared against three manual raters. The algorithm performed with substantial reliability in comparison to the manual raters when it came to annotating which area of interest gaze was closest to. However, the algorithm was more conservative than the manual raters for classifying if gaze was directed towards an object of interest. The algorithmic approach represents a viable and promising means for automating gaze classification for mobile eye tracking.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1047 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114095551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}