{"title":"SynchronEyes: A Novel, Paired Data Set of Eye Movements Recorded Simultaneously with Remote and Wearable Eye-Tracking Devices","authors":"Samantha Aziz, D. Lohr, Oleg V. Komogortsev","doi":"10.1145/3517031.3532522","DOIUrl":"https://doi.org/10.1145/3517031.3532522","url":null,"abstract":"Comparing the performance of new eye-tracking devices against an established benchmark is vital for identifying differences in the way eye movements are reported by each device. This paper introduces a new paired data set comprised of eye movement recordings captured simultaneously with both the EyeLink 1000—considered the “gold standard” in eye-tracking research studies—and the recently released AdHawk MindLink eye tracker. Our work presents a methodology for simultaneous data collection and a comparison of the resulting eye-tracking signal quality achieved by each device.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"100 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114085599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Perceptual Depth Changes with Eye Vergence and Interpupillary Distance using an Eye Tracker in Virtual Reality","authors":"M. S. Arefin, J. Swan, R. C. Hoffing, Steven M. Thurman","doi":"10.1145/3517031.3529632","DOIUrl":"https://doi.org/10.1145/3517031.3529632","url":null,"abstract":"Virtual Reality (VR) technology has advanced to include eye-tracking, allowing novel research, such as investigating how our visual system coordinates eye movements with changes in perceptual depth. The purpose of this study was to examine whether eye tracking could track perceptual depth changes during a visual discrimination task. We derived two depth-dependent variables from eye tracker data: eye vergence angle (EVA) and interpupillary distance (IPD). As hypothesized, our results revealed that shifting gaze from near-to-far depth significantly decreased EVA and increased IPD, while the opposite pattern was observed while shifting from far-to-near. Importantly, the amount of change in these variables tracked closely with relative changes in perceptual depth, and supported the hypothesis that eye tracker data may be used to infer real-time changes in perceptual depth in VR. Our method could be used as a new tool to adaptively render information based on depth and improve the VR user experience.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114783084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring Cognitive Effort with Pupillary Activity and Fixational Eye Movements When Reading: Longitudinal Comparison of Children With and Without Primary Music Education","authors":"Agata Rodziewicz-Cybulska, Krzysztof Krejtz, A. Duchowski, I. Krejtz","doi":"10.1145/3517031.3529636","DOIUrl":"https://doi.org/10.1145/3517031.3529636","url":null,"abstract":"This article evaluates the Low/High Index of Pupillary Activity (LHIPA), a measure of cognitive effort based on pupil response, in the context of reading. At the beginning of 2nd and 3rd grade, 107 children (8-9 y.o.) from music and general primary school were asked to read 40 sentences with keywords differing in length and frequency while their eye movements were recorded. Sentences with low frequency or long keywords received more attention than sentences with high frequent or short keywords. The word frequency and length effects were more pronounced in younger children. At the 2nd grade, music children dwelt less on sentences with short frequent keywords than on sentences with long frequent keywords. As expected LHIPA decreased over sentences with low frequency short keywords suggesting more cognitive effort at earlier stages of reading ability. This finding shows the utility of LHIPA as a measure of cognitive effort in education.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130662940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EyeLikert: Eye-based Interactions for Answering Surveys","authors":"Moritz Langner, N. Aßfalg, Peyman Toreini, A. Maedche","doi":"10.1145/3517031.3529776","DOIUrl":"https://doi.org/10.1145/3517031.3529776","url":null,"abstract":"Surveys are a widely used method for data collection from participants. However, responding to surveys is a time consuming task and requires cognitive and physical efforts of the participants. Eye-based interactions offer the advantage of high speed pointing, low physical effort and implicitness. These advantages are already successfully leveraged in different domains, but so far not investigated in supporting participants in responding to surveys. In this paper, we present EyeLikert, a tool that enables users to answer Likert-scale questions in surveys with their eyes. EyeLikert integrates three different eye-based interactions considering the Midas Touch problem. We hypothesize that enabling eye-based interactions to fill out surveys offers the potential to reduce the physical effort, increase the speed of responding questions, and thereby reduce drop-out rates.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consider the Head Movements! Saccade Computation in Mobile Eye-Tracking","authors":"Negar Alinaghi, Ioannis Giannopoulos","doi":"10.1145/3517031.3529624","DOIUrl":"https://doi.org/10.1145/3517031.3529624","url":null,"abstract":"Saccadic eye movements are known to serve as a suitable proxy for tasks prediction. In mobile eye-tracking, saccadic events are strongly influenced by head movements. Common attempts to compensate for head-movement effects either neglect saccadic events altogether or fuse gaze and head-movement signals measured by IMUs in order to simulate the gaze signal at head-level. Using image processing techniques, we propose a solution for computing saccades based on frames of the scene-camera video. In this method, fixations are first detected based on gaze positions specified in the coordinate system of each frame, and then respective frames are merged. Lastly, pairs of consecutive fixations –forming a saccade- are projected into the coordinate system of the stitched image using the homography matrices computed by the stitching algorithm. The results show a significant difference in length between projected and original saccades, and approximately 37% of error introduced by employing saccades without head-movement consideration.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132311142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-User Eye-Tracking","authors":"Bhanuka Mahanama","doi":"10.1145/3517031.3532197","DOIUrl":"https://doi.org/10.1145/3517031.3532197","url":null,"abstract":"The human gaze characteristics provide informative cues on human behavior during various activities. Using traditional eye trackers, assessing gaze characteristics in the wild requires a dedicated device per participant and therefore is not feasible for large-scale experiments. In this study, we propose a commodity hardware-based multi-user eye-tracking system. We leverage the recent advancements in Deep Neural Networks and large-scale datasets for implementing our system. Our preliminary studies provide promising results for multi-user eye-tracking on commodity hardware, providing a cost-effective solution for large-scale studies.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129808919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Usability of the super-vowel for gaze-based text entry","authors":"J. Matulewski, M. Patera","doi":"10.1145/3517031.3529231","DOIUrl":"https://doi.org/10.1145/3517031.3529231","url":null,"abstract":"We tested experimentally the idea of reducing the number of buttons in the gaze-based text entry system by replacing all vowels with a single diamond character, which we call super-vowel. It is inspired by historical optimizations of the written language, like Abjar. This way, the number of items on the screen was reduced, simplifying text input and allowing to make the buttons larger. However, the modification can also be a distractor that increases the number of errors. As a result of an experiment on 29 people, it turned out that in the case of non-standard methods of entering text, the modification slightly increases the speed of entering the text and reduces the number of errors. However, this does not apply to the standard keyboard, a direct transformation of physical computer keyboards with a Qwerty button layout.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126614821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing dignity for adaptive wheelchair users via a hybrid eye tracking and electromyography training game","authors":"Peter A. Smith, Matt Dombrowski, Shea McLinden, Calvin MacDonald, Devon Lynn, John Sparkman, Dominique Courbin, Albert Manero","doi":"10.1145/3517031.3529612","DOIUrl":"https://doi.org/10.1145/3517031.3529612","url":null,"abstract":"Maintaining autonomous activities can be challenging for patients with neuromuscular disorders or quadriplegia, where control of joysticks for powered wheelchairs may not be feasible. Advancements in human machine interfaces have resulted in methods to capture the intentionality of the individual through non-traditional controls and communicating the users desires to a robotic interface. This research explores the design of a training game that teaches users to control a wheelchair through such a device that utilizes electromyography (EMG). The training game combines the use of EMG and eye tracking to enhance the impression of dignity while building self-efficacy and supporting autonomy for users. The system implements both eye tracking and surface electromyography, via the temporalis muscles, for gamified training and simulation of a novel wheelchair interface.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114216597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Perception of Smooth Pursuit Target Speed","authors":"Heiko Drewes, Sophia Sakel, H. Hussmann","doi":"10.1145/3517031.3529234","DOIUrl":"https://doi.org/10.1145/3517031.3529234","url":null,"abstract":"Gaze-aware interfaces should work on all display sizes. This paper researches whether angular velocity or tangential speed should be kept when scaling a gaze-aware interface based on circular smooth pursuits to another display size. We also address the question of which target speed and which trajectory size feels most comfortable for the users. We present the results of a user study where the participants were asked how they perceived the speed and the radius of a circular moving smooth pursuit target. The data show that the users’ judgment of the optimal speed corresponds with an optimal detection rate. The results also enable us to give an optimal value pair for target speed and trajectory radius. Additionally, we give a functional relation on how to adapt the target speed when scaling the geometry to keep optimal detection rate and user experience.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121758770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fairness in Oculomotoric Biometric Identification","authors":"Paul Prasse, D. R. Reich, Silvia Makowski, L. Jäger, T. Scheffer","doi":"10.1145/3517031.3529633","DOIUrl":"https://doi.org/10.1145/3517031.3529633","url":null,"abstract":"Gaze patterns are known to be highly individual, and therefore eye movements can serve as a biometric characteristic. We explore aspects of the fairness of biometric identification based on gaze patterns. We find that while oculomotoric identification does not favor any particular gender and does not significantly favor by age range, it is unfair with respect to ethnicity. Moreover, fairness concerning ethnicity cannot be achieved by balancing the training data for the best-performing model.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"10884 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116840720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}