Proceedings of the Symposium on Eye Tracking Research and Applications: Latest Publications

Starting to get bored: an outdoor eye tracking study of tourists exploring a city panorama
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578216
P. Kiefer, I. Giannopoulos, Dominik Kremer, C. Schlieder, M. Raubal
Abstract: Predicting the moment when a visual explorer of a place loses interest and starts to get bored is of considerable importance to the design of touristic information services. This paper investigates factors affecting the duration of the visual exploration of a city panorama. We report on an empirical outdoor eye tracking study in the real world with tourists following a free exploration paradigm without a time limit. As the main result, the number of areas of interest revisited during a short period was found to be a good predictor of the total exploration duration.
Citations: 54
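The paper's predictor — the number of AOIs revisited within a short period — can be computed directly from an AOI-labelled fixation sequence. A minimal sketch (the function name, data layout, and window semantics are assumptions, not the authors' code):

```python
def aoi_revisits(fixations, window):
    """Count AOI revisits within a sliding time window.

    fixations: time-ordered list of (timestamp_s, aoi_name) tuples.
    A revisit is a return to an AOI already fixated earlier in the
    window, after having looked at some other AOI in between.
    """
    revisits = 0
    for i, (t, aoi) in enumerate(fixations):
        # AOIs fixated earlier, within `window` seconds of this fixation
        earlier = {a for (u, a) in fixations[:i] if t - u <= window}
        prev_aoi = fixations[i - 1][1] if i > 0 else None
        if aoi in earlier and aoi != prev_aoi:
            revisits += 1
    return revisits
```

A high revisit count early in the recording would then flag an explorer who is running out of new material — the "starting to get bored" signal.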
Improving cross-ratio-based eye tracking techniques by leveraging the binocular fixation constraint
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578202
Zhengyou Zhang, Q. Cai
Abstract: The cross-ratio approach has recently attracted increasing attention in eye-gaze tracking due to its simplicity in setting up a tracking system. Its accuracy, however, is lower than that of the model-based approach, and substantial efforts have been devoted to improving it. Binocular fixation is essential for humans to have good depth perception, and this paper presents a technique leveraging this constraint. It is used in two ways: first, to jointly estimate the homography matrices for both eyes, and second, to estimate the eye gaze itself. Experimental results with both synthetic and real data show that the proposed approach produces significantly better results than using a single eye, and also better than averaging the independent results from the two eyes.
Citations: 20
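For context, the cross-ratio family maps the pupil centre, expressed relative to the quadrilateral of four corneal glints (reflections of screen-corner LEDs), onto the screen through a homography; the paper's contribution is estimating the two eyes' homographies jointly. A stdlib sketch of just the generic single-eye mapping — not the authors' joint estimation, and all coordinates are made up:

```python
def solve_linear(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def homography(src, dst):
    """3x3 projective transform from 4 point pairs (h22 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, p):
    """Map a point (e.g. a pupil centre in glint coordinates) to the screen."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

The binocular constraint enters, per the abstract, by requiring both eyes' homographies to send their respective pupil positions to the same screen point during calibration.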
Assessment of the improvement of signal recorded in infant EEG by using eye tracking algorithms
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2583041
E. Martínez
Abstract: Event-related potentials (ERPs) elicited by visual stimuli involve showing the same stimuli to the subject dozens of times while recording the electrical brain activity, then averaging the EEG signal of the valid trials to remove the general brain activity and keep the response generated by the stimuli. ERPs are a common methodology among cognitive developmental scientists investigating how infants develop, because responses to external events can be observed in the ERP without specific behavioral requirements on the infants. However, applying this technique to infants has disadvantages not found with adult participants: mainly the limited attention span and the difficulty of obtaining enough artifact-free trials due to movement artifacts and lack of attention to the stimuli. These limitations are the main reason for the current attrition rates in infant ERP studies, which are expected to be between 50% and 75% [DeBoer et al., 2007; Stets et al., 2012].
Citations: 0
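The abstract only states the problem; the use of eye tracking implied by the title is to decide, per trial, whether the infant actually attended the stimulus before including that trial in the ERP average. A sketch of that screening step — the function, the 70% threshold, and the data layout are entirely assumptions, not the author's method:

```python
def valid_trials(trials, min_on_stimulus=0.7):
    """Select ERP trials where gaze stayed on the stimulus long enough.

    trials: list of trials, each a list of booleans (one per eye-tracking
    sample: was gaze inside the stimulus region?).
    Returns the indices of trials to keep for averaging.
    """
    keep = []
    for i, samples in enumerate(trials):
        if samples and sum(samples) / len(samples) >= min_on_stimulus:
            keep.append(i)
    return keep
```

Screening on gaze rather than on EEG artifacts alone would let an experimenter discard inattentive trials objectively instead of by video coding.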
Predicting an observer's task using multi-fixation pattern analysis
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578208
Christopher Kanan, Nicholas A. Ray, D. Bseiso, J. Hsiao, G. Cottrell
Abstract: Since Yarbus's seminal work in 1965, vision scientists have argued that people's eye movement patterns differ depending upon their task. This suggests that we may be able to infer a person's task (or mental state) from their eye movements alone. Recently, this was attempted by Greene et al. [2012] in a Yarbus-like replication study; however, they were unable to successfully predict the task given to their observers. We reanalyze their data and show that, with more powerful algorithms, it is possible to predict the observer's task. We also used our algorithms to infer the image being viewed by an observer and the observer's identity. More generally, we show how off-the-shelf machine learning algorithms can be used to make inferences from an observer's eye movements, using an approach we call Multi-Fixation Pattern Analysis (MFPA).
Citations: 48
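The general shape of such an analysis is: summarize each trial's fixations into a feature vector, then feed the vectors to a classifier. A stdlib sketch using a nearest-centroid classifier as a stand-in for the paper's (unspecified here) algorithms — feature choices and names are assumptions:

```python
import statistics

def fixation_features(fixations):
    """Summary statistics of one trial: fixations are (x, y, duration_ms)."""
    durs = [d for (_, _, d) in fixations]
    xs = [x for (x, _, _) in fixations]
    ys = [y for (_, y, _) in fixations]
    return (statistics.mean(durs), statistics.pstdev(durs),
            statistics.pstdev(xs), statistics.pstdev(ys), len(fixations))

def train_centroids(trials, labels):
    """Mean feature vector per task label."""
    by_task = {}
    for trial, lab in zip(trials, labels):
        by_task.setdefault(lab, []).append(fixation_features(trial))
    return {lab: tuple(statistics.mean(col) for col in zip(*feats))
            for lab, feats in by_task.items()}

def predict_task(centroids, trial):
    """Assign the trial to the task with the nearest feature centroid."""
    f = fixation_features(trial)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))
```

Replacing the centroid rule with any off-the-shelf classifier (SVM, random forest) over the same per-trial features is the spirit of the MFPA framing.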
An investigation into determining head pose for gaze estimation on unmodified mobile devices
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578184
Stephen Ackland, H. Istance, S. Coupland, Stephen Vickers
Abstract: Traditionally, devices which are able to determine a user's gaze are large, expensive and often restrictive. We investigate the prospect of using common webcams and mobile devices such as laptops, tablets and phones, without modification, as an alternative means of obtaining a user's gaze. A person's gaze is fundamentally determined by the pose of the head as well as the orientation of the eyes. This initial work investigates the first of these factors: an estimate of the 3D head pose (and subsequently the positions of the eye centres) relative to a camera device. Specifically, we seek a low-cost algorithm that requires only a one-time calibration for an individual user and can run in real time on the aforementioned mobile devices with noisy camera data. We use our head tracker to estimate the 4 eye corners of a user over a 10-second video, and present the results at several different frame rates (fps) to analyse the impact of lower-quality cameras on the tracker. We show that our algorithm is efficient enough to run at 75 fps on a common laptop, but struggles with tracking loss below 10 fps.
Citations: 4
Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2582176
Patrick Renner, Thies Pfeiffer
Abstract: To solve complex tasks cooperatively in close interaction with humans, robots need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path-planning task, so 3D gaze directions and hand movements are of particular interest. However, the analysis of gaze and gestures is a time-consuming task: usually, manual frame-by-frame annotation of the eye tracker's scene camera video is necessary. To tackle this issue, an automatic approach for annotating interactions, based on the EyeSee3D method, is presented: a combination of geometric modeling and 3D marker tracking serves to align real-world stimuli with virtual proxies, using the scene camera images of the mobile eye tracker alone. In addition to the EyeSee3D approach, face detection is used to automatically detect fixations on the interlocutor. For the acquisition of gestures, an optical marker tracking system is integrated and fused into the multimodal representation of the communicative situation.
Citations: 3
A dynamic graph visualization perspective on eye movement data
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578175
Michael Burch, Fabian Beck, Michael Raschke, Tanja Blascheck, D. Weiskopf
Abstract: During eye tracking studies, vast amounts of spatio-temporal data in the form of eye gaze trajectories are recorded. Finding insights in these time-varying data sets is a challenging task. Visualization techniques such as heat maps or gaze plots help find patterns in the data, but either aggregate the data heavily (heat maps) or are difficult to read due to overplotting (gaze plots). In this paper, we propose transforming eye movement data into a dynamic graph data structure to explore the visualization problem from a new perspective. By aggregating gaze trajectories of participants over time periods or Areas of Interest (AOIs), a fair trade-off between aggregation and detail is achieved. We show that existing dynamic graph visualizations can be used to display the transformed data, and illustrate the approach by applying it to eye tracking data recorded for investigating the readability of tree diagrams.
Citations: 26
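The transformation the abstract describes — bucketing AOI-to-AOI saccades into one graph per time window — can be sketched in a few lines. A minimal version (window semantics and names are assumptions, not the paper's implementation):

```python
from collections import defaultdict

def dynamic_aoi_graph(scanpath, interval):
    """Aggregate an AOI-labelled scanpath into one transition graph per
    time window: {window_index: {(src_aoi, dst_aoi): saccade_count}}.

    scanpath: time-ordered list of (timestamp_s, aoi) fixations.
    """
    graphs = defaultdict(lambda: defaultdict(int))
    for (t0, a), (t1, b) in zip(scanpath, scanpath[1:]):
        if a != b:  # a saccade between distinct AOIs
            graphs[int(t0 // interval)][(a, b)] += 1
    return {w: dict(g) for w, g in graphs.items()}
```

Each window's edge-weighted graph can then be handed to any existing dynamic graph visualization, which is the point of the paper's reframing.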
Comparing mouse and MAGIC pointing for moving target acquisition
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578172
Jutta Hild, D. Gill, J. Beyerer
Abstract: Moving target acquisition is a challenging and manually stressful task if performed with an all-manual, pointer-based interaction technique like mouse interaction, especially if targets are small, move fast, and are visible on screen only for a limited time. The MAGIC pointing interaction approach combines the precision of manual, pointer-based interaction with the speed and low manual stress of eye pointing. In this contribution, a pilot study on moving target acquisition with twelve participants is presented, using an abstract experimental task derived from a video analysis scenario. Mouse input, conservative MAGIC pointing and MAGIC button are compared with respect to acquisition time, error rate, and user satisfaction. Although none of the participants had used MAGIC pointing before, eight voted for MAGIC button as their favorite technique; participants performed with only slightly higher mean acquisition time and error rate than with the familiar mouse input. Conservative MAGIC pointing was preferred by three participants; however, its mean acquisition time and error rate were significantly worse than with mouse input.
Citations: 7
Saccade plots
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2578205
Michael Burch, H. Schmauder, Michael Raschke, D. Weiskopf
Abstract: Visualization by heat maps is a powerful technique for showing frequently visited areas in displayed stimuli. However, by aggregating the spatio-temporal data, heat maps lose the information about the transitions between fixations, i.e., the saccades. Gaze plots, instead, show trajectories as overplotted polylines, leading to much visual clutter, which makes those diagrams difficult to read. In this paper, we introduce Saccade Plots as a novel technique that combines the benefits of both approaches: it shows the gaze frequencies as a heat map and the saccades in the form of color-coded triangular matrices that surround the heat map. We illustrate the usefulness of our technique by applying it to a representative example from a previously conducted eye tracking study.
Citations: 16
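The two data structures a saccade plot draws — per-AOI gaze frequencies for the central heat map and a directed AOI-to-AOI saccade count matrix for the surrounding panels — are easy to derive from a scanpath. A sketch under the assumption that fixations have already been assigned to AOIs (names and layout are ours, not the paper's):

```python
def saccade_plot_data(scanpath, aois):
    """Split a scanpath into saccade-plot ingredients.

    scanpath: time-ordered list of AOI labels, one per fixation.
    aois: the AOI labels in display order.
    Returns (heat, trans): per-AOI fixation counts, and a directed
    AOI-to-AOI saccade count matrix (trans[src][dst]).
    """
    idx = {a: i for i, a in enumerate(aois)}
    heat = [0] * len(aois)
    trans = [[0] * len(aois) for _ in aois]
    for a in scanpath:
        heat[idx[a]] += 1
    for a, b in zip(scanpath, scanpath[1:]):
        if a != b:  # within-AOI refixations are not saccades between AOIs
            trans[idx[a]][idx[b]] += 1
    return heat, trans
```

The upper and lower triangles of `trans` correspond to the two saccade directions, which is what the plot's two color-coded triangular matrices encode.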
A smooth pursuit calibration technique
Proceedings of the Symposium on Eye Tracking Research and Applications Pub Date: 2014-03-26 DOI: 10.1145/2578153.2583042
Feridun M. Celebi, Elizabeth S. Kim, Quan Wang, Carla A. Wall, F. Shic
Abstract: Many different eye-tracking calibration techniques have been developed [e.g. see Talmi and Liu 1999; Zhu and Ji 2007]. A community standard is a sparse 9-point calibration that relies on the sequential presentation of known scene targets. However, fixating a series of discrete points has been described as tedious, dull and tiring for the eyes [Bulling, Gellersen, Pfeuffer, Turner and Vidal 2013].
Citations: 21