{"title":"Seeing in time: an investigation of entrainment and visual processing in toddlers","authors":"Hsing-fen Tu","doi":"10.1145/3204493.3207418","DOIUrl":"https://doi.org/10.1145/3204493.3207418","url":null,"abstract":"Recent neurophysiological and behavioral studies have provided strong evidence of rhythmic entrainment in the perceptual level in adults. The present study examines if rhythmic auditory stimulation synchronized with visual stimuli and fast tempi could enhance the visual processing in toddlers. Two groups of participants with different musical experiences are recruited. A head-mounted camera will be used to investigate perceptual entrainment when participants perform visual search tasks.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133809329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fixation-indices based correlation between text and image visual features of webpages","authors":"Sandeep Vidyapu, V. Saradhi, S. Bhattacharya","doi":"10.1145/3204493.3204566","DOIUrl":"https://doi.org/10.1145/3204493.3204566","url":null,"abstract":"Web elements associate with a set of visual features based on their data modality. For example, text associated with font-size and font-family whereas images associate with intensity and color. The unavailability of methods to relate these heterogeneous visual features limiting the attention-based analyses on webpages. In this paper, we propose a novel approach to establish the correlation between text and image visual features that influence users' attention. We pair the visual features of text and images based on their associated fixation-indices obtained from eye-tracking. From paired data, a common subspace is learned using Canonical Correlation Analysis (CCA) to maximize the correlation between them. The performance of the proposed approach is analyzed through a controlled eye-tracking experiment conducted on 51 real-world webpages. A very high correlation of 99.48% is achieved between text and images with text related font families and image related color features influencing the correlation.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116131841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scanpath comparison in medical image reading skills of dental students: distinguishing stages of expertise development","authors":"Nora Castner, Enkelejda Kasneci, Thomas C. Kübler, K. Scheiter, Juliane Richter, Thérése F. Eder, F. Hüttig, C. Keutel","doi":"10.1145/3204493.3204550","DOIUrl":"https://doi.org/10.1145/3204493.3204550","url":null,"abstract":"A popular topic in eye tracking is the difference between novices and experts and their domain-specific eye movement behaviors. However, very little is researched regarding how expertise develops, and more specifically, the developmental stages of eye movement behaviors. Our work compares the scanpaths of five semesters of dental students viewing orthopantomograms (OPTs) with classifiers to distinguish sixth semester through tenth semester students. We used the analysis algorithm SubsMatch 2.0 and the Needleman-Wunsch algorithm. Overall, both classifiers were able distinguish the stages of expertise in medical image reading above chance level. Specifically, it was able to accurately determine sixth semester students with no prior training as well as sixth semester students after training. Ultimately, using scanpath models to recognize gaze patterns characteristic of learning stages, we can provide more adaptive, gaze-based training for students.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123237911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using eye tracking to simplify screening for visual field defects and improve vision rehabilitation: extended abstract","authors":"Birte Gestefeld, A. Grillini, J. Marsman, F. Cornelissen","doi":"10.1145/3204493.3207414","DOIUrl":"https://doi.org/10.1145/3204493.3207414","url":null,"abstract":"My thesis will encompass two main research objectives: (1) Evaluation of eye tracking as a method to screen for visual field defects. (2) Investigating how vision rehabilitation therapy can be improved by employing eye tracking.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124674730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting the gaze depth in head-mounted displays using multiple feature regression","authors":"Martin Weier, T. Roth, André Hinkenjann, P. Slusallek","doi":"10.1145/3204493.3204547","DOIUrl":"https://doi.org/10.1145/3204493.3204547","url":null,"abstract":"Head-mounted displays (HMDs) with integrated eye trackers have opened up a new realm for gaze-contingent rendering. The accurate estimation of gaze depth is essential when modeling the optical capabilities of the eye. Most recently multifocal displays are gaining importance, requiring focus estimates to control displays or lenses. Deriving the gaze depth solely by sampling the scene's depth at the point-of-regard fails for complex or thin objects as eye tracking is suffering from inaccuracies. Gaze depth measures using the eye's vergence only provide an accurate depth estimate for the first meter. In this work, we combine vergence measures and multiple depth measures into feature sets. This data is used to train a regression model to deliver improved estimates. We present a study showing that using multiple features allows for an accurate estimation of the focused depth (MSE<0.1m) over a wide range (first 6m).","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121684622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microsaccade detection using pupil and corneal reflection signals","authors":"D. Niehorster, M. Nyström","doi":"10.1145/3204493.3204573","DOIUrl":"https://doi.org/10.1145/3204493.3204573","url":null,"abstract":"In contemporary research, microsaccade detection is typically performed using the calibrated gaze-velocity signal acquired from a video-based eye tracker. To generate this signal, the pupil and corneal reflection (CR) signals are subtracted from each other and a differentiation filter is applied, both of which may prevent small microsaccades from being detected due to signal distortion and noise amplification. We propose a new algorithm where microsaccades are detected directly from uncalibrated pupil-, and CR signals. It is based on detrending followed by windowed correlation between pupil and CR signals. The proposed algorithm outperforms the most commonly used algorithm in the field (Engbert & Kliegl, 2003), in particular for small amplitude microsaccades that are difficult to see in the velocity signal even with the naked eye. We argue that it is advantageous to consider the most basic output of the eye tracker, i.e. pupil-, and CR signals, when detecting small microsaccades.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131125010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-made mobile gaze tracking for group studies","authors":"M. Toivanen, V. Salonen, M. Hannula","doi":"10.1145/3204493.3208347","DOIUrl":"https://doi.org/10.1145/3204493.3208347","url":null,"abstract":"Mobile gaze tracking does not need to be expensive. We have a self-made mobile gaze tracking system, consisting of glasses-like frame and a software which computes the gaze point. As the total cost of the equipment is less than thousand euros, we have prepared five devices which we use in group studies, to simultaneously estimate multiple students' gaze in classroom. The inexpensive mobile gaze tracking technology opens new possibilities for studying the attentional processes on a group level.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121225620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audio-visual interaction in emotion perception for communication: doctoral symposium, extended abstract","authors":"M. D. Boer, D. Başkent, F. Cornelissen","doi":"10.1145/3204493.3207422","DOIUrl":"https://doi.org/10.1145/3204493.3207422","url":null,"abstract":"Information from multiple modalities contributes to recognizing emotions. While it is known interactions occur between modalities, it is unclear what characterizes these. These interactions, and changes in these interactions due to sensory impairments, are the main subject of this PhD project. This extended abstract for the Doctoral Symposium of ETRA 2018 describes the project; its background, what I hope to achieve, and some preliminary results.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126114082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smooth-i: smart re-calibration using smooth pursuit eye movements","authors":"Argenis Ramirez Gomez, Hans-Werner Gellersen","doi":"10.1145/3204493.3204585","DOIUrl":"https://doi.org/10.1145/3204493.3204585","url":null,"abstract":"Eye gaze for interaction is dependent on calibration. However, gaze calibration can deteriorate over time affecting the usability of the system. We propose to use motion matching of smooth pursuit eye movements and known motion on the display to determine when there is a drift in accuracy and use it as input for re-calibration. To explore this idea we developed Smooth-i, an algorithm that stores calibration points and updates them incrementally when inaccuracies are identified. To validate the accuracy of Smooth-i, we conducted a study with five participants and a remote eye tracker. A baseline calibration profile was used by all participants to test the accuracy of the Smooth-i re-calibration following interaction with moving targets. Results show that Smooth-i is able to manage re-calibration efficiently, updating the calibration profile only when inaccurate data samples are detected.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133488724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A system to determine if learners know the divisibility rules and apply them correctly","authors":"P. Potgieter, P. Blignaut","doi":"10.1145/3204493.3204526","DOIUrl":"https://doi.org/10.1145/3204493.3204526","url":null,"abstract":"Mathematics teachers may find it challenging to manage the learning that takes place in learners' minds. Typical true/false or multiple choice assessments, whether in oral, written or electronic format, do not provide evidence that learners applied the correct principles. A system was developed to analyse learners' gaze behaviour while they were determining whether a multi-digit dividend is divisible by a divisor. The system provides facilities for a teacher to set up tests and generate various types of quantitative and qualitative reports. The system was tested with a group of 16 learners from Grade 7 to Grade 10 in a pre-post experiment to investigate the effect of revision on their performance. It was proven that, with tests that are carefully compiled according to a set of heuristics, eye tracking can be used to determine whether learners use the correct strategy when applying divisibility rules.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132324106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}