{"title":"Understanding the relationship between microsaccades and pupil dilation","authors":"Sudeep Raj, Chia-Chien Wu, Shreya Raj, Nada Attar","doi":"10.1145/3314111.3323076","DOIUrl":"https://doi.org/10.1145/3314111.3323076","url":null,"abstract":"Existing literature reveals little information about the relationship between microsaccade rate and the average change in pupil size. There is a need to investigate this relationship and how the microsaccade rate may be relevant to cognitive load. In our study, we compared the microsaccade rate to the average change in pupil size during eight experimental conditions. Four of them were considered fixation conditions (subjects look at a fixation cross in each visual scene) and four were free-viewing conditions (subjects are free to move their eyes over the visual scene). We analyzed the change in pupil size and microsaccade rate for the first part of each task and as well as the entire task in all conditions. We discovered a significant correlation between the microsaccade rate and the average change in pupil size during the first part of each task, and comparable characteristics throughout the entire task. Then we measured the data for only one of the experimental conditions in free-viewing that involves a search task to understand comparable characteristics related to cognitive load. We found that there is a correlation between the microsaccade and pupil data. We hope that this finding will help further the understanding of the relative function of microsaccades and use it to support cognitive load response and pupil measurement.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121558490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monocular gaze depth estimation using the vestibulo-ocular reflex","authors":"D. Mardanbegi, Christopher Clarke, Hans-Werner Gellersen","doi":"10.1145/3314111.3319822","DOIUrl":"https://doi.org/10.1145/3314111.3319822","url":null,"abstract":"Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movement mediated by the vestibulo-ocular reflex (VOR). VOR stabilises gaze on a target during head movement, with eye movement in the opposite direction, and the VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while only requiring one eye, and provide insight into open challenges in harnessing VOR gain as a robust measure.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132711580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guiding gaze: expressive models of reading and face scanning","authors":"A. Duchowski, S. Jörg, Jaret Screws, Nina A. Gehrer, M. Schönenberg, Krzysztof Krejtz","doi":"10.1145/3314111.3319848","DOIUrl":"https://doi.org/10.1145/3314111.3319848","url":null,"abstract":"We evaluate subtle, emotionally-driven models of eye movement animation. Two models are tested, reading and face scanning, each based on recorded gaze transition probabilities. For reading, simulated emotional mood is governed by the probability density function that varies word advancement, i.e., re-fixations, forward, or backward skips. For face scanning, gaze behavior depends on task (gender or emotion discrimination) or the facial emotion portrayed. Probability density functions in both cases are derived from empirically observed transitions that significantly alter viewing behavior, captured either during mood-induced reading or during scanning faces expressing different emotions. A perceptual study shows that viewers can distinguish between reading and face scanning eye movements. However, viewers could not gauge the emotional valence of animated eye motion. For animation, our contribution shows that simulated emotionally-driven viewing behavior is too subtle to be discerned, or it needs to be exaggerated to be effective.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130514484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Get a grip: slippage-robust and glint-free gaze estimation for real-time pervasive head-mounted eye tracking","authors":"Thiago Santini, D. Niehorster, Enkelejda Kasneci","doi":"10.1145/3314111.3319835","DOIUrl":"https://doi.org/10.1145/3314111.3319835","url":null,"abstract":"A key assumption conventionally made by flexible head-mounted eye-tracking systems is often invalid: The eye center does not remain stationary w.r.t. the eye camera due to slippage. For instance, eye-tracker slippage might happen due to head acceleration or explicit adjustments by the user. As a result, gaze estimation accuracy can be significantly reduced. In this work, we propose Grip, a novel gaze estimation method capable of instantaneously compensating for eye-tracker slippage without additional hardware requirements such as glints or stereo eye camera setups. Grip was evaluated using previously collected data from a large scale unconstrained pervasive eye-tracking study. Our results indicate significant slippage compensation potential, decreasing average participant median angular offset by more than 43% w.r.t. a non-slippage-robust gaze estimation method. A reference implementation of Grip was integrated into EyeRecToo, an open-source hardware-agnostic eye-tracking software, thus making it readily accessible for multiple eye trackers (Available at: www.ti.uni-tuebingen.de/perception).","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130547785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When you don't see what you expect: incongruence in music and source code reading","authors":"Natalia Chitalkina","doi":"10.1145/3314111.3322866","DOIUrl":"https://doi.org/10.1145/3314111.3322866","url":null,"abstract":"Both musicians and programmers have expectations when they read music scores or source code. The goal of these studies is to get an insight into what will happen when these expectations are violated in familiar tasks. In music reading study, we explored eye movements of musically experienced participants singing and playing on a piano familiar melodies either containing or not containing a bar shifted down a tone in two different keys. First-pass fixation durations, the mean pupil size during first-pass fixations and eye-time span parameters were analysed using linear mixed models. All three parameters can provide useful information on the processing of incongruence in music. Furthermore, the pupil size parameter might be sensitive to the modality of performance. In the code reading study, we plan to explore incongruence in familiar code tasks and its reflection in eye movements of programmers.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133927512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction graphs: visual analysis of eye movement data from interactive stimuli","authors":"Michael Burch","doi":"10.1145/3317960.3321617","DOIUrl":"https://doi.org/10.1145/3317960.3321617","url":null,"abstract":"Eye tracking studies have been conducted to understand the visual attention in different scenarios like, for example, how people read text, which graphical elements in a visualization are frequently attended, how they drive a car, or how they behave during a shopping task. All of these scenarios - either static or dynamic - show a visual stimulus in which the spectators are not able to change the visual content they see. This is different if interaction is allowed like in (graphical) user interfaces (UIs), integrated development environments (IDEs), dynamic web pages (with different user-defined states), or interactive displays in general as in human-computer interaction, which gives a viewer the opportunity to actively change the stimulus content. Typically, for the analysis and visualization of time-varying visual attention paid to a web page, there is a big difference for the analytics and visualization approaches - algorithmically as well as visually - if the presented web page stimulus is static or dynamic, i.e. time-varying, or dynamic in the sense that user interaction is allowed. In this paper we discuss the challenges for visual analysis concepts in order to analyze the recorded data, in particular, with the goal to improve interactive stimuli, i.e., the layout of a web page, but also the interaction concept. We describe a data model which leads to interaction graphs, a possible way to analyze and visualize this kind of eye movement data.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132114924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting speed- and accuracy of gradient based dark pupil tracking using vectorization and differential evolution","authors":"A. Krause, K. Essig","doi":"10.1145/3314111.3319849","DOIUrl":"https://doi.org/10.1145/3314111.3319849","url":null,"abstract":"Gradient based dark pupil tracking [Timm and Barth 2011] is a simple and robust algorithm for pupil center estimation. The algorithm's time complexity of O(n4) can be tackled by applying a two-stage process (coarse center estimation followed by a windowed refinement), as well as by optimizing and parallelizing code using cache-friendly data structures, vector-extensions of modern CPU's and GPU acceleration. We could achieve a substantial speed up compared to a non-optimized implementation: 12x using vector extensions and 65x using a GPU. Further, the two-stage process combined with parameter optimization using differential evolution considerably increased the accuracy of the algorithm. We evaluated our implementation using the \"Labelled pupils the wild\" data set. The percentage of frames with a pixel error below 15px increased from 28% to 72%, surpassing algorithmically more complex algorithms like ExCuse (64%) and catching up with recent algorithms like PuRe (87%).","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123297166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using mutual distance plot and warped time distance chart to compare scan-paths of multiple observers","authors":"P. Kasprowski, Katarzyna Harężlak","doi":"10.1145/3317958.3318226","DOIUrl":"https://doi.org/10.1145/3317958.3318226","url":null,"abstract":"The aim of the research is the introduction of new techniques that enable a visual comparison of scan-paths. Every eye tracking experiment produces many scan-paths, and one of the main challenges of eye tracking analysis is how two compare these scan-paths. A classic solution is to extract easily measurable features such as fixation durations or saccade lengths. There are also many more sophisticated techniques that compare two scan-paths using only spatial or both spatial and temporal information. These techniques typically return a value (or several values) that may be used as scan-path similarity/distance measure. However, there is still a lack of widely adopted methods that offer not only the measure but enable a visual comparison of scan-paths. The paper introduces two possible options: the Mutual Distance Plot for two scan-paths and the Warped Time Distance chart for the comparison of the theoretically unlimited number of scan-paths. It is shown that these visualizations may reveal information about relationships between two or more scan-paths on straightforward charts. The informativeness of the solution is analyzed using both artificial and real data.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122378087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Space-time volume visualization of gaze and stimulus","authors":"Valentin Bruder, K. Kurzhals, S. Frey, D. Weiskopf, T. Ertl","doi":"10.1145/3314111.3319812","DOIUrl":"https://doi.org/10.1145/3314111.3319812","url":null,"abstract":"We present a method for the spatio-temporal analysis of gaze data from multiple participants in the context of a video stimulus. For such data, an overview of the recorded patterns is important to identify common viewing behavior (such as attentional synchrony) and outliers. We adopt the approach of space-time cube visualization, which extends the spatial dimensions of the stimulus by time as the third dimension. Previous work mainly handled eye tracking data in the space-time cube as point cloud, providing no information about the stimulus context. This paper presents a novel visualization technique that combines gaze data, a dynamic stimulus, and optical flow with volume rendering to derive an overview of the data with contextual information. With specifically designed transfer functions, we emphasize different data aspects, making the visualization suitable for explorative analysis and for illustrative support of statistical findings alike.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121099553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of variable positioning of text prediction in gaze-based text entry","authors":"Korok Sengupta, Raphael Menges, C. Kumar, Steffen Staab","doi":"10.1145/3317956.3318152","DOIUrl":"https://doi.org/10.1145/3317956.3318152","url":null,"abstract":"Text predictions play an important role in improving the performance of gaze-based text entry systems. However, visual search, scanning, and selection of text predictions require a shift in the user's attention from the keyboard layout. Hence the spatial positioning of predictions becomes an imperative aspect of the end-user experience. In this work, we investigate the role of spatial positioning by comparing the performance of three different keyboards entailing variable positions for text predictions. The experiment result shows no significant differences in the text entry performance, i.e., displaying suggestions closer to visual fovea did not enhance the text entry rate of participants, however they used more keystrokes and backspace. This implies to the inessential usage of suggestions when it is in the constant visual attention of users, resulting in increased cost of correction. Furthermore, we argue that the fast saccadic eye movements undermines the spatial distance optimization in prediction positioning.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129835517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}