{"title":"Influence of stimulus and viewing task types on a learning-based visual saliency model","authors":"Binbin Ye, Yusuke Sugano, Yoichi Sato","doi":"10.1145/2578153.2578199","DOIUrl":"https://doi.org/10.1145/2578153.2578199","url":null,"abstract":"Learning-based approaches using actual human gaze data have been proven to be an efficient way to acquire accurate visual saliency models and attracted much interest in recent years. However, it still remains yet to be answered how different types of stimulus (e.g., fractal images, and natural images with or without human faces) and viewing tasks (e.g., free viewing or a preference rating task) affect learned visual saliency models. In this study, we quantitatively investigate how learned saliency models differ when using datasets collected in different settings (image contextual level and viewing task) and discuss the importance of choosing appropriate experimental settings.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126790017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SubsMatch: scanpath similarity in dynamic scenes based on subsequence frequencies","authors":"Thomas C. Kübler, Enkelejda Kasneci, W. Rosenstiel","doi":"10.1145/2578153.2578206","DOIUrl":"https://doi.org/10.1145/2578153.2578206","url":null,"abstract":"The analysis of visual scanpaths, i.e., series of fixations and saccades, in complex dynamic scenarios is highly challenging and usually performed manually. We propose SubsMatch, a scanpath comparison algorithm for dynamic, interactive scenarios based on the frequency of repeated gaze patterns. Instead of measuring the gaze duration towards a semantic target object (which would be hard to label in dynamic scenes), we examine the frequency of attention shifts and exploratory eye movements. SubsMatch was evaluated on highly dynamic data from a driving experiment to identify differences between scanpaths of subjects who failed a driving test and subjects who passed.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116428695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attentional processes in natural reading: the effect of margin annotations on reading behaviour and comprehension","authors":"Andrea Mazzei, T. Koll, F. Kaplan, P. Dillenbourg","doi":"10.1145/2578153.2578195","DOIUrl":"https://doi.org/10.1145/2578153.2578195","url":null,"abstract":"We present an eye tracking study to investigate how natural reading behavior and reading comprehension are influenced by in-context annotations. In a lab experiment, three groups of participants were asked to read a text and answer comprehension questions: a control group without taking annotations, a second group reading and taking annotations, and a third group reading a peer-annotated version of the same text. A self-made head-mounted eye tracking system was specifically designed for this experiment, in order to study how learners read and quickly re-read annotated paper texts, in low constrained experimental conditions. In the analysis, we measured the phenomenon of annotation-induced overt attention shifts in reading, and found that: (1) the reader's attention shifts toward a margin annotation more often when the annotation lies in the early peripheral vision, and (2) the number of attention shifts, between two different types of information units, is positively related to comprehension performance in quick re-reading. These results can be translated into potential criteria for knowledge assessment systems.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133749221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Look and lean: accurate head-assisted eye pointing","authors":"O. Špakov, Poika Isokoski, P. Majaranta","doi":"10.1145/2578153.2578157","DOIUrl":"https://doi.org/10.1145/2578153.2578157","url":null,"abstract":"Compared to the mouse, eye pointing is inaccurate. As a consequence, small objects are difficult to point by gaze alone. We suggest using a combination of eye pointing and subtle head movements to achieve accurate hands-free pointing in a conventional desktop computing environment. For tracking the head movements, we exploited information of the eye position in the eye tracker's camera view. We conducted a series of three experiments to study the potential caveats and benefits of using head movements to adjust gaze cursor position. Results showed that head-assisted eye pointing significantly improves the pointing accuracy without a negative impact on the pointing time. In some cases participants were able to point almost 3 times closer to the target's center, compared to the eye pointing alone (7 vs. 19 pixels). We conclude that head assisted eye pointing is a comfortable and potentially very efficient alternative for other assisting methods in the eye pointing, such as zooming.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128660986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software framework for an ocular biometric system","authors":"C. Holland, Oleg V. Komogortsev","doi":"10.1145/2578153.2582174","DOIUrl":"https://doi.org/10.1145/2578153.2582174","url":null,"abstract":"This document describes the software framework of an ocular biometric system. The framework encompasses several interconnected components that allow an end-user to perform biometric enrollment, verification, and identification with most common eye tracking devices. The framework, written in C#, includes multiple state-of-the-art biometric algorithms and information fusion techniques, and can be easily extended to utilize new biometric techniques and eye tracking devices.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125862938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Entropy-based statistical analysis of eye movement transitions","authors":"Krzysztof Krejtz, T. Szmidt, A. Duchowski, I. Krejtz","doi":"10.1145/2578153.2578176","DOIUrl":"https://doi.org/10.1145/2578153.2578176","url":null,"abstract":"The paper introduces a two-step method of quantifying eye movement transitions between Areas of Interests (AOIs). First, individuals' gaze switching patterns, represented by fixated AOI sequences, are modeled as Markov chains. Second, Shannon's entropy coefficient of the fit Markov model is computed to quantify the complexity of individual switching patterns. To determine the overall distribution of attention over AOIs, the entropy coefficient of individuals' stationary distribution of fixations is calculated. The novelty of the method is that it captures the variability of individual differences in eye movement characteristics, which are then summarized statistically. The method is demonstrated on gaze data collected during free viewing of classical art paintings. Shannon's coefficient derived from individual transition matrices is significantly related to participants' individual differences as well as to their aesthetic experience of art pieces.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123625049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"iShadow: the computational eyeglass system","authors":"A. Mayberry, Pan Hu, Benjamin M Marlin, C. Salthouse, Deepak Ganesan","doi":"10.1145/2578153.2582177","DOIUrl":"https://doi.org/10.1145/2578153.2582177","url":null,"abstract":"Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We use a sparse pixel-based gaze estimation algorithm that is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function which minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 4 degrees.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122253077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relative contributions of internal motor cues and external semantic cues to anticipatory smooth pursuit","authors":"Nicholas M. Ross, Elio M. Santos","doi":"10.1145/2578153.2578179","DOIUrl":"https://doi.org/10.1145/2578153.2578179","url":null,"abstract":"Smooth pursuit eye movements anticipate the future motion of targets when future motion is either signaled by visual cues or inferred from past history. To study the effect of anticipation derived from movement planning, the eye pursued a cursor whose horizontal motion was controlled by the hand via a mouse. The direction of a critical turn was specified by a cue or was freely chosen. Information from planning to move the hand (which itself showed anticipatory effects) elicited anticipatory smooth eye movements, allowing the eye to track self-generated target motion with virtually no lag. Lags were present only when either visual cues or motor cues were removed. The results show that information derived from the planning of movement is as effective as visual cues in generating anticipatory eye movements. Eye movements in dynamic environments will be facilitated by collaborative anticipatory movements of hand and eye. Cues derived from movement planning may be particularly valuable in fast-paced human-computer interactions.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117266676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The use of gaze to control drones","authors":"John Paulin Hansen, A. Alapetite, I. Scott MacKenzie, Emilie Møllenbach","doi":"10.1145/2578153.2578156","DOIUrl":"https://doi.org/10.1145/2578153.2578156","url":null,"abstract":"This paper presents an experimental investigation of gaze-based control modes for unmanned aerial vehicles (UAVs or \"drones\"). Ten participants performed a simple flying task. We gathered empirical measures, including task completion time, and examined the user experience for difficulty, reliability, and fun. Four control modes were tested, with each mode applying a combination of x-y gaze movement and manual (keyboard) input to control speed (pitch), altitude, rotation (yaw), and drafting (roll). Participants had similar task completion times for all four control modes, but one combination was considered significantly more reliable than the others. We discuss design and performance issues for the gaze-plus-manual split of controls when drones are operated using gaze in conjunction with tablets, near-eye displays (glasses), or monitors.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131262643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What influences dwell time during source code reading?: analysis of element type and frequency as factors","authors":"T. Busjahn, R. Bednarik, Carsten Schulte","doi":"10.1145/2578153.2578211","DOIUrl":"https://doi.org/10.1145/2578153.2578211","url":null,"abstract":"While knowledge about reading behavior in natural-language text is abundant, little is known about the visual attention distribution when reading source code of computer programs. Yet, this knowledge is important for teaching programming skills as well as designing IDEs and programming languages. We conducted a study in which 15 programmers with various expertise read short source codes and recorded their eye movements. In order to study attention distribution on code elements, we introduced the following procedure: First we (pre)-processed the eye movement data using log-transformation. Taking into account the word lengths, we then analyzed the time spent on different lexical elements. It shows that most attention is oriented towards understanding of identifiers, operators, keywords and literals, relatively little reading time is spent on separators. We further inspected the attention on keywords and provide a description of the gaze on these primary building blocks for any formal language. The analysis indicates that approaches from research on natural-language text reading can be applied to source code as well, however not without review.","PeriodicalId":142459,"journal":{"name":"Proceedings of the Symposium on Eye Tracking Research and Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127346313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}