Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Getting (more) real: bringing eye movement classification to HMD experiments with equirectangular stimuli
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3319829
I. Agtzidis, M. Dorr
{"title":"Getting (more) real: bringing eye movement classification to HMD experiments with equirectangular stimuli","authors":"I. Agtzidis, M. Dorr","doi":"10.1145/3314111.3319829","DOIUrl":"https://doi.org/10.1145/3314111.3319829","url":null,"abstract":"The classification of eye movements is a very important part of eye tracking research and has been studied since its early days. Over recent years, we have experienced an increasing shift towards more immersive experimental scenarios with the use of eye-tracking enabled glasses and head-mounted displays. In these new scenarios, however, most of the existing eye movement classification algorithms cannot be applied robustly anymore because they were developed with monitor-based experiments using regular 2D images and videos in mind. In this paper, we describe two approaches that reduce artifacts of eye movement classification for 360° videos shown in head-mounted displays. For the first approach, we discuss how decision criteria have to change in the space of 360° videos, and use these criteria to modify five popular algorithms from the literature. The modified algorithms are publicly available at https://web.gin.g-node.org/ioannis.agtzidis/360_em_algorithms. For cases where an existing algorithm cannot be modified, e.g. because it is closed-source, we present a second approach that maps the data instead of the algorithm to the 360° space. An empirical evaluation of both approaches shows that they significantly reduce the artifacts of the initial algorithm, especially in the areas further from the horizontal midline.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121143879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
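The key correction the paper describes is that pixel distances in an equirectangular frame overstate angular distances away from the horizontal midline, so velocity-based decision criteria must be computed on the viewing sphere. The authors' modified algorithms are available at the URL above; purely as a minimal illustration of the underlying geometry (not their released code), the sketch below maps equirectangular gaze samples to unit vectors, computes great-circle angular speed, and applies a naive I-VT-style threshold. All names and the 100°/s threshold are assumptions for the example.

```python
import numpy as np

def equirect_to_unit_vectors(x, y, width, height):
    """Map equirectangular pixel coordinates to unit vectors on the
    viewing sphere (longitude spans 360 degrees across the image
    width, latitude 180 degrees across its height)."""
    lon = (np.asarray(x) / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.asarray(y) / height) * np.pi
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def angular_speeds(x, y, t, width, height):
    """Great-circle angular speed (deg/s) between consecutive samples."""
    v = equirect_to_unit_vectors(x, y, width, height)
    cos_d = np.clip(np.sum(v[:-1] * v[1:], axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_d)) / np.diff(np.asarray(t))

def classify_ivt_360(x, y, t, width, height, threshold=100.0):
    """Naive spherical I-VT: label inter-sample intervals whose
    angular speed exceeds the threshold as saccades."""
    speeds = angular_speeds(x, y, t, width, height)
    return np.where(speeds > threshold, "saccade", "fixation")
```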
Exploring simple neural network architectures for eye movement classification
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3319813
Jonas Goltz, M. Grossberg, Ronak Etemadpour
{"title":"Exploring simple neural network architectures for eye movement classification","authors":"Jonas Goltz, M. Grossberg, Ronak Etemadpour","doi":"10.1145/3314111.3319813","DOIUrl":"https://doi.org/10.1145/3314111.3319813","url":null,"abstract":"Analysis of eye-gaze is a critical tool for studying human-computer interaction and visualization. Yet eye tracking systems only report eye-gaze on the scene by producing large volumes of coordinate time series data. To be able to use this data, we must first extract salient events such as eye fixations, saccades, and post-saccadic oscillations (PSO). Manually extracting these events is time-consuming, labor-intensive and subject to variability. In this paper, we present and evaluate simple and fast automatic solutions for eye-gaze analysis based on supervised learning. Similar to some recent studies, we developed different simple neural networks demonstrating that feature learning produces superior results in identifying events from sequences of gaze coordinates. We do not apply any ad-hoc post-processing, thus creating a fully automated end-to-end algorithms that perform as good as current state-of-the-art architectures. Once trained they are fast enough to be run in a near real time setting.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125320582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
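The paper's networks operate directly on sequences of gaze coordinates without hand-crafted post-processing. As a hedged sketch of what such a "simple" architecture could look like (the layer sizes, window length, and class set are assumptions, not the authors' exact models), here is a small 1D convolutional network in PyTorch that labels a window of (x, y) samples as fixation, saccade, or PSO:

```python
import torch
import torch.nn as nn

class GazeEventNet(nn.Module):
    """Small 1D CNN over a window of (x, y) gaze coordinates that
    labels the window as fixation, saccade, or PSO."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, xy):             # xy: (batch, 2, window_len)
        h = self.features(xy).squeeze(-1)
        return self.classifier(h)      # raw logits, one per event class

# usage: a batch of 8 windows of 65 samples each
model = GazeEventNet()
print(model(torch.randn(8, 2, 65)).shape)  # torch.Size([8, 3])
```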
Characterizing joint attention behavior during real world interactions using automated object and gaze detection
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3319843
Pranav Venuprasad, Tushar Dobhal, Anurag Paul, Tu N. Nguyen, A. Gilman, P. Cosman, L. Chukoskie
{"title":"Characterizing joint attention behavior during real world interactions using automated object and gaze detection","authors":"Pranav Venuprasad, Tushar Dobhal, Anurag Paul, Tu N. Nguyen, A. Gilman, P. Cosman, L. Chukoskie","doi":"10.1145/3314111.3319843","DOIUrl":"https://doi.org/10.1145/3314111.3319843","url":null,"abstract":"Joint attention is an essential part of the development process of children, and impairments in joint attention are considered as one of the first symptoms of autism. In this paper, we develop a novel technique to characterize joint attention in real time, by studying the interaction of two human subjects with each other and with multiple objects present in the room. This is done by capturing the subjects' gaze through eye-tracking glasses and detecting their looks on predefined indicator objects. A deep learning network is trained and deployed to detect the objects in the field of vision of the subject by processing the video feed of the world view camera mounted on the eye-tracking glasses. The looking patterns of the subjects are determined and a real-time audio response is provided when a joint attention is detected, i.e., when their looks coincide. Our findings suggest a trade-off between the accuracy measure (Look Positive Predictive Value) and the latency of joint look detection for various system parameters. For more accurate joint look detection, the system has higher latency, and for faster detection, the detection accuracy goes down.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131262401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
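At its core, the described system reduces to comparing two per-subject streams of "look" events (object labels produced by the detector on each world-view camera) and flagging moments where the looks coincide. Below is a minimal sketch of that matching step, assuming each stream is a list of (timestamp, object) events and using a tolerance window whose width trades accuracy against latency, as the abstract notes; this is an illustrative reconstruction, not the authors' pipeline.

```python
from collections import deque

def detect_joint_looks(stream_a, stream_b, window=0.5):
    """Detect joint attention: both subjects look at the same object
    within `window` seconds of each other.

    Each stream is a list of (timestamp, object_label) look events.
    Returns a list of (timestamp, object_label) joint-look detections.
    """
    merged = sorted([(t, o, "a") for t, o in stream_a] +
                    [(t, o, "b") for t, o in stream_b])
    recent = {"a": deque(), "b": deque()}
    joint = []
    for t, obj, who in merged:
        other = recent["b" if who == "a" else "a"]
        # evict the other subject's looks that fell out of the window
        while other and t - other[0][0] > window:
            other.popleft()
        if any(o == obj for _, o in other):
            joint.append((t, obj))  # e.g. trigger the audio response here
        recent[who].append((t, obj))
    return joint

# toy usage with hypothetical look events
a = [(0.10, "cup"), (1.20, "ball")]
b = [(0.35, "cup"), (2.50, "ball")]
print(detect_joint_looks(a, b))  # [(0.35, 'cup')]
```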
EyeMRTK: a toolkit for developing eye gaze interactive applications in virtual and augmented reality
Pub Date: 2019-06-25 | DOI: 10.1145/3317956.3318155
D. Mardanbegi, Thies Pfeiffer
{"title":"EyeMRTK: a toolkit for developing eye gaze interactive applications in virtual and augmented reality","authors":"D. Mardanbegi, Thies Pfeiffer","doi":"10.1145/3317956.3318155","DOIUrl":"https://doi.org/10.1145/3317956.3318155","url":null,"abstract":"For head mounted displays, like they are used in mixed reality applications, eye gaze seems to be a natural interaction modality. EyeMRTK provides building blocks for eye gaze interaction in virtual and augmented reality. Based on a hardware abstraction layer, it allows interaction researchers and developers to focus on their interaction concepts, while enabling them to evaluate their ideas on all supported systems. In addition to that, the toolkit provides a simulation layer for debugging purposes, which speeds up prototyping during development on the desktop.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
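The abstract's central design idea is the hardware abstraction layer: interaction code is written against a gaze interface, and real trackers or a desktop simulator plug in behind it. EyeMRTK itself targets game-engine environments, so the Python sketch below is only a language-agnostic illustration of the pattern; all class and method names are hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float
    origin: tuple     # gaze ray origin in world coordinates
    direction: tuple  # normalized gaze ray direction

class GazeProvider(ABC):
    """Hardware abstraction layer: interaction concepts are written
    against this interface, never against a specific tracker SDK."""
    @abstractmethod
    def latest_sample(self) -> GazeSample:
        ...

class SimulatedGazeProvider(GazeProvider):
    """Simulation layer for desktop debugging: derives a gaze ray from
    a mouse-ray callback, so no eye tracker or HMD is needed."""
    def __init__(self, mouse_ray_fn):
        self._mouse_ray_fn = mouse_ray_fn

    def latest_sample(self) -> GazeSample:
        origin, direction = self._mouse_ray_fn()
        return GazeSample(0.0, origin, direction)

# interaction code only ever sees GazeProvider
def gaze_hits(provider: GazeProvider, target_test) -> bool:
    s = provider.latest_sample()
    return target_test(s.origin, s.direction)
```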
Looks can mean achieving: understanding eye gaze patterns of proficiency in code comprehension
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3322876
Jonathan A. Saddler
{"title":"Looks can mean achieving: understanding eye gaze patterns of proficiency in code comprehension","authors":"Jonathan A. Saddler","doi":"10.1145/3314111.3322876","DOIUrl":"https://doi.org/10.1145/3314111.3322876","url":null,"abstract":"The research proposes four hypotheses that focus on deriving helpful insights from eye patterns, including hidden truths concerning programmer expertise, task context and difficulty. We present results from a study performed in a classroom setting with 17 students, in which we found that novice programmers visit output statements and declarations the same amount as the rest of the program they are presented other than control flow block headers. This research builds upon insightful findings from our previous work, wherein we focus on gathering statistical eye-gaze effects between categories of various populations to drive the pursuit of new research. Ongoing and future work entails using the iTrace infrastructure to capture gaze as participants scroll to read code pages extending longer than what can fit on one screen. The focus will be on building various models that relate eye gaze to comprehension via methods that realistically capture activity in a development environment.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134532831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Attentional orienting in virtual reality using endogenous and exogenous cues in auditory and visual modalities
Pub Date: 2019-06-25 | DOI: 10.1145/3317959.3321490
Rébaï Soret, Pom Charras, C. Hurter, Vsevolod Peysakhovich
{"title":"Attentional orienting in virtual reality using endogenous and exogenous cues in auditory and visual modalities","authors":"Rébaï Soret, Pom Charras, C. Hurter, Vsevolod Peysakhovich","doi":"10.1145/3317959.3321490","DOIUrl":"https://doi.org/10.1145/3317959.3321490","url":null,"abstract":"The virtual reality (VR) has nowadays numerous applications in training, education, and rehabilitation. To efficiently present the immersive 3D stimuli, we need to understand how spatial attention is oriented in VR. The efficiency of different cues can be compared using the Posner paradigm. In this study, we designed an ecological environment where participants were presented with a modified version of the Posner cueing paradigm. Twenty subjects equipped with an eye-tracking system and VR HMD performed a sandwich preparation task. They were asked to assemble the ingredients which could be either endogenously and exogenously cued in both auditory and visual modalities. The results showed that all valid cues made participants react faster. While directional arrow (visual endogenous) and 3D sound (auditory exogenous) oriented attention globally to the entire cued hemifield, the vocal instruction (auditory endogenous) and object highlighting (visual exogenous) allowed more local orientation, in a specific region of space. No differences in gaze shift initiation nor time to fixate the target were found suggesting the covert orienting.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114815407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
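In the Posner paradigm, the quantity of interest is the cue validity effect: the mean reaction-time difference between invalidly and validly cued trials, computed per cue type. Here is a small sketch of that computation with an entirely hypothetical trial log (the numbers below are placeholders, not the study's data):

```python
import pandas as pd

# Hypothetical trial log: one row per trial with cue type, cue
# validity, and reaction time. Values are placeholders only.
trials = pd.DataFrame({
    "cue":      ["arrow", "arrow", "3d_sound", "3d_sound",
                 "vocal", "vocal", "highlight", "highlight"],
    "validity": ["valid", "invalid"] * 4,
    "rt_ms":    [412, 468, 430, 471, 405, 455, 398, 450],
})

# Cue validity effect per cue type: mean RT(invalid) - mean RT(valid).
means = trials.pivot_table(index="cue", columns="validity", values="rt_ms")
means["validity_effect_ms"] = means["invalid"] - means["valid"]
print(means)
```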
Screen corner detection using polarization camera for cross-ratio based gaze estimation
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3319814
M. Sasaki, Takashi Nagamatsu, K. Takemura
{"title":"Screen corner detection using polarization camera for cross-ratio based gaze estimation","authors":"M. Sasaki, Takashi Nagamatsu, K. Takemura","doi":"10.1145/3314111.3319814","DOIUrl":"https://doi.org/10.1145/3314111.3319814","url":null,"abstract":"Eye tracking, which measures line of sight, is expected to advance as an intuitive and rapid input method for user interfaces, and a cross-ratio based method that calculates the point-of-gaze using homography matrices has attracted attention because it does not require hardware calibration to determine the geometric relationship between an eye camera and a screen. However, this method requires near-infrared (NIR) light-emitting diodes (LEDs) attached to the display in order to detect screen corners. Consequently, LEDs must be installed around the display to estimate the point-of-gaze. Without these requirements, cross-ratio based gaze estimation can be distributed smoothly. Therefore, we propose the use of a polarization camera for detecting the screen area reflected on a corneal surface. The reflection area of display light is easily detected by the polarized image because the light radiated from the display is polarized linearly by the internal polarization filter. With the proposed method, the screen corners can be determined without using NIR LEDs, and the point-of-gaze can be estimated using the detected corners on the corneal surface. We investigated the accuracy of the estimated point-of-gaze based on a cross-ratio method under various illumination and display conditions. Cross-ratio based gaze estimation is expected to be utilized widely in commercial products because the proposed method does not require infrared light sources at display corners.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124500759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
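Once the four screen-corner reflections are located on the cornea, the cross-ratio family of methods maps the pupil center through the projective relation between those corners and the physical screen. Below is a simplified sketch of that mapping step using OpenCV: a single homography without the glint-normalization refinements real systems add, and with made-up coordinates throughout.

```python
import cv2
import numpy as np

def estimate_pog(corneal_corners, screen_corners, pupil_center):
    """Map the pupil center through the homography between the four
    screen-corner reflections on the cornea and the physical screen
    corners; returns the point-of-gaze in screen coordinates.

    Inputs are illustrative: corneal_corners and pupil_center would come
    from eye-image processing (here, polarization-based corner
    detection); screen_corners are the known display corners in pixels.
    """
    H, _ = cv2.findHomography(
        np.asarray(corneal_corners, dtype=np.float32),
        np.asarray(screen_corners, dtype=np.float32))
    p = np.asarray([[pupil_center]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

# usage with hypothetical coordinates
pog = estimate_pog(
    corneal_corners=[(310, 215), (352, 218), (349, 247), (308, 244)],
    screen_corners=[(0, 0), (1920, 0), (1920, 1080), (0, 1080)],
    pupil_center=(330, 230))
print(pog)  # approximate on-screen gaze point in pixels
```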
iLid
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3322503
Soha Rostaminia, A. Mayberry, Deepak Ganesan, Benjamin M Marlin, Jeremy Gummeson
{"title":"iLid","authors":"Soha Rostaminia, A. Mayberry, Deepak Ganesan, Benjamin M Marlin, Jeremy Gummeson","doi":"10.1145/3314111.3322503","DOIUrl":"https://doi.org/10.1145/3314111.3322503","url":null,"abstract":"The ability to monitor eye closures and blink patterns has long been known to enable accurate assessment of fatigue and drowsiness in individuals. Many measures of the eye are known to be correlated with fatigue including coarse-grained measures like the rate of blinks as well as fine-grained measures like the duration of blinks and the extent of eye closures. Despite a plethora of research validating these measures, we lack wearable devices that can continually and reliably monitor them in the natural environment. In this work, we present a low-power system, iLid, that can continually sense fine-grained measures such as blink duration and Percentage of Eye Closures (PERCLOS) at high frame rates of 100fps. We present a complete solution including design of the sensing, signal processing, and machine learning pipeline and implementation on a prototype computational eyeglass platform.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121064758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
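Given a per-frame eye-openness signal, the two fine-grained measures the abstract names are straightforward to compute. Here is a small sketch, assuming a normalized aperture series sampled at 100 fps and the common P80 convention for PERCLOS; thresholds and names are illustrative, not iLid's implementation.

```python
import numpy as np

def perclos(openness, threshold=0.2):
    """PERCLOS (P80): fraction of samples where the eyelid aperture is
    below 20% of fully open, i.e. the eye is at least 80% closed."""
    return float(np.mean(np.asarray(openness, dtype=float) < threshold))

def blink_durations(openness, fps=100.0, threshold=0.5):
    """Durations (seconds) of contiguous runs of closed-eye frames."""
    closed = (np.asarray(openness, dtype=float) < threshold).astype(int)
    edges = np.diff(np.r_[0, closed, 0])   # +1 at run starts, -1 after ends
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return (ends - starts) / fps

# toy usage: 1 s of signal at 100 fps with one ~80 ms blink
signal = np.ones(100)
signal[40:48] = 0.05
print(perclos(signal))          # 0.08
print(blink_durations(signal))  # [0.08]
```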
A gaze model improves autonomous driving
Pub Date: 2019-06-25 | DOI: 10.1145/3314111.3319846
Congcong Liu, Y. Chen, L. Tai, Haoyang Ye, Ming Liu, Bertram E. Shi
{"title":"A gaze model improves autonomous driving","authors":"Congcong Liu, Y. Chen, L. Tai, Haoyang Ye, Ming Liu, Bertram E. Shi","doi":"10.1145/3314111.3319846","DOIUrl":"https://doi.org/10.1145/3314111.3319846","url":null,"abstract":"End-to-end behavioral cloning trained by human demonstration is now a popular approach for vision-based autonomous driving. A deep neural network maps drive-view images directly to steering commands. However, the images contain much task-irrelevant data. Humans attend to behaviorally relevant information using saccades that direct gaze towards important areas. We demonstrate that behavioral cloning also benefits from active control of gaze. We trained a conditional generative adversarial network (GAN) that accurately predicts human gaze maps while driving in both familiar and unseen environments. We incorporated the predicted gaze maps into end-to-end networks for two behaviors: following and overtaking. Incorporating gaze information significantly improves generalization to unseen environments. We hypothesize that incorporating gaze information enables the network to focus on task critical objects, which vary little between environments, and ignore irrelevant elements in the background, which vary greatly.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123840661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
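The integration step the abstract describes, feeding a predicted gaze map into the end-to-end network, can be illustrated as a spatial modulation of the input. The PyTorch sketch below is one plausible form; the layer sizes, the multiplicative masking scheme, and the omission of the GAN predictor itself are all assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GazeModulatedPolicy(nn.Module):
    """Illustrative end-to-end driving head: a predicted gaze map
    (e.g. from a pretrained conditional GAN, not included here) acts
    as a spatial mask on the input image before regression to a
    steering command."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, image, gaze_map):
        # image: (B, 3, H, W); gaze_map: (B, 1, H, W), values in [0, 1]
        feats = self.encoder(image * (1.0 + gaze_map))  # emphasize gazed regions
        return self.head(feats)                         # steering command

model = GazeModulatedPolicy()
steer = model(torch.randn(2, 3, 128, 256), torch.rand(2, 1, 128, 256))
print(steer.shape)  # torch.Size([2, 1])
```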
Quantitative visual attention prediction on webpage images using multiclass SVM
Pub Date: 2019-06-25 | DOI: 10.1145/3317960.3321614
Sandeep Vidyapu, Vijaya Saradhi Vedula, S. Bhattacharya
{"title":"Quantitative visual attention prediction on webpage images using multiclass SVM","authors":"Sandeep Vidyapu, Vijaya Saradhi Vedula, S. Bhattacharya","doi":"10.1145/3317960.3321614","DOIUrl":"https://doi.org/10.1145/3317960.3321614","url":null,"abstract":"Webpage images---image elements on a webpage---are prominent to draw user attention. Modeling attention on webpage images helps in their synthesis and rendering. This paper presents a visual feature-based attention prediction model for webpage images. Firstly, fixated images were assigned quantitative visual attention based on users' sequential attention allocation on webpages. Subsequently, fixated images' intrinsic visual features were extracted along with position and size on respective webpages. A multiclass support vector machine (multiclass SVM) was learned using the visual features and associated attention. In tandem, a majority-voting-scheme was employed to predict the quantitative visual attention for test webpage images. The proposed approach was analyzed through an eye-tracking experiment conducted on 36 real-world webpages with 42 participants. Our model outperforms (average accuracy of 91.64% and micro F1-score of 79.1%) the existing position and size constrained regression model (average accuracy of 73.92% and micro F1-score of 34.80%).","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121375548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
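The classifier itself is standard: quantized attention levels become class labels, and per-image features (intrinsic appearance plus position and size) become the input vectors. Here is a minimal sketch with scikit-learn, which handles the multiclass case internally via one-vs-one SVMs; the feature columns, class count, and random placeholder data are assumptions, and the paper's majority-voting step is omitted.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-image feature vectors: intrinsic visual features
# plus position and size on the page (columns are placeholders).
X = rng.random((200, 6))          # e.g. [contrast, colorfulness, ..., x, y, area]
y = rng.integers(0, 4, size=200)  # quantized attention level (4 classes)

# SVC is inherently multiclass (one-vs-one); scaling features first
# is standard practice for RBF-kernel SVMs.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(rng.random((5, 6))))  # predicted attention class per image
```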