Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications: Latest Publications

Art facing science: Artistic heuristics for face detection: tracking gaze when looking at faces
Pub Date: 2019-06-25, DOI: 10.1145/3317958.3319809
A. Duchowski, Nina A. Gehrer, M. Schönenberg, Krzysztof Krejtz
{"title":"Art facing science: Artistic heuristics for face detection: tracking gaze when looking at faces","authors":"A. Duchowski, Nina A. Gehrer, M. Schönenberg, Krzysztof Krejtz","doi":"10.1145/3317958.3319809","DOIUrl":"https://doi.org/10.1145/3317958.3319809","url":null,"abstract":"Automatic Area Of Interest (AOI) demarcation of facial regions is not yet commonplace in applied eye-tracking research, partially because automatic AOI labeling is prone to error. Most previous eye-tracking studies relied on manual frame-by-frame labeling of facial AOIs. We present a fully automatic approach for facial AOI labeling (i.e., eyes, nose, mouth) and gaze registration within those AOIs, based on modern computer vision techniques combined with heuristics drawn from art. We discuss details in computing gaze analytics, provide proof-of-concept, and a short validation against what we consider ground truth. Relative dwell time over expected AOIs exceeded 98% showing efficacy of the approach.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"11 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123279899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
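The headline metric here is relative dwell time over expected AOIs. As a rough illustration of how such a metric can be computed, the following is a minimal Python sketch (not the authors' code) that assumes fixations and facial AOI polygons, e.g. from a landmark detector, are already available:

```python
# Illustrative sketch: relative dwell time per facial AOI, given fixations and
# AOI polygons (e.g., produced by a facial-landmark detector). Not the paper's code.
from matplotlib.path import Path

def relative_dwell_times(fixations, aoi_polygons):
    """fixations: list of (x, y, duration_ms); aoi_polygons: dict name -> vertex list."""
    paths = {name: Path(verts, closed=True) for name, verts in aoi_polygons.items()}
    dwell = dict.fromkeys(aoi_polygons, 0.0)
    total = 0.0
    for x, y, dur in fixations:
        total += dur
        for name, path in paths.items():
            if path.contains_point((x, y)):
                dwell[name] += dur
                break  # assumes non-overlapping AOIs; overlaps need a priority order
    return {name: d / total for name, d in dwell.items()} if total else dwell

# Made-up fixations and AOIs in pixel coordinates of a hypothetical face image.
fixations = [(410, 220, 310), (415, 225, 250), (400, 330, 180)]
aois = {"eyes": [(350, 180), (480, 180), (480, 260), (350, 260)],
        "mouth": [(370, 300), (460, 300), (460, 360), (370, 360)]}
print(relative_dwell_times(fixations, aois))
```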
Task classification model for visual fixation, exploration, and search
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3323073
Ayush Kumar, Anjul Tyagi, Michael Burch, D. Weiskopf, K. Mueller
{"title":"Task classification model for visual fixation, exploration, and search","authors":"Ayush Kumar, Anjul Tyagi, Michael Burch, D. Weiskopf, K. Mueller","doi":"10.1145/3314111.3323073","DOIUrl":"https://doi.org/10.1145/3314111.3323073","url":null,"abstract":"Yarbus' claim to decode the observer's task from eye movements has received mixed reactions. In this paper, we have supported the hypothesis that it is possible to decode the task. We conducted an exploratory analysis on the dataset by projecting features and data points into a scatter plot to visualize the nuance properties for each task. Following this analysis, we eliminated highly correlated features before training an SVM and Ada Boosting classifier to predict the tasks from this filtered eye movements data. We achieve an accuracy of 95.4% on this task classification problem and hence, support the hypothesis that task classification is possible from a user's eye movement data.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123709985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
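The described pipeline, dropping one feature from each highly correlated pair and then training SVM and AdaBoost classifiers, can be sketched with scikit-learn roughly as follows (synthetic stand-in data; the feature names and the 0.9 correlation threshold are illustrative assumptions, not values from the paper):

```python
# Sketch of correlation-based feature elimination followed by SVM / AdaBoost training.
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 8)),
                 columns=[f"feat{i}" for i in range(8)])   # stand-in gaze features
y = rng.integers(0, 3, size=300)                           # fixation / exploration / search

# Drop one feature from every pair whose absolute correlation exceeds 0.9.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
X_filtered = X.drop(columns=to_drop)

for clf in (SVC(kernel="rbf"), AdaBoostClassifier()):
    score = cross_val_score(clf, X_filtered, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```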
Time- and space-efficient eye tracker calibration
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3319818
Heiko Drewes, Ken Pfeuffer, Florian Alt
{"title":"Time- and space-efficient eye tracker calibration","authors":"Heiko Drewes, Ken Pfeuffer, Florian Alt","doi":"10.1145/3314111.3319818","DOIUrl":"https://doi.org/10.1145/3314111.3319818","url":null,"abstract":"One of the obstacles to bring eye tracking technology to everyday human computer interactions is the time consuming calibration procedure. In this paper we investigate a novel calibration method based on smooth pursuit eye movement. The method uses linear regression to calculate the calibration mapping. The advantage is that users can perform the calibration quickly in a few seconds and only use a small calibration area to cover a large tracking area. We first describe the theoretical background on establishing a calibration mapping and discuss differences of calibration methods used. We then present a user study comparing the new regression-based method with a classical nine-point and with other pursuit-based calibrations. The results show the proposed method is fully functional, quick, and enables accurate tracking of a large area. The method has the potential to be integrated into current eye tracking systems to make them more usable in various use cases.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117058171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 20
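A minimal sketch of a regression-based calibration mapping, assuming (the abstract does not spell out the exact model) a simple affine fit by least squares from raw-gaze/target pairs collected while the user pursues a moving stimulus:

```python
# Fit screen = [x, y, 1] @ A by least squares from pursuit calibration samples.
import numpy as np

raw = np.array([[0.31, 0.42], [0.55, 0.40], [0.52, 0.61],
                [0.30, 0.63], [0.43, 0.52]])                 # normalized eye features
screen = np.array([[200., 300.], [900., 310.], [880., 700.],
                   [210., 690.], [560., 500.]])              # target positions (px)

design = np.hstack([raw, np.ones((len(raw), 1))])            # [x, y, 1] per sample
coef, *_ = np.linalg.lstsq(design, screen, rcond=None)       # 3x2 mapping matrix

def gaze_to_screen(xy):
    return np.append(xy, 1.0) @ coef

print(gaze_to_screen([0.45, 0.50]))   # estimated on-screen gaze point
```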
A comparative study of eye tracking and hand controller for aiming tasks in virtual reality
Pub Date: 2019-06-25, DOI: 10.1145/3317956.3318153
Francisco López Luro, V. Sundstedt
{"title":"A comparative study of eye tracking and hand controller for aiming tasks in virtual reality","authors":"Francisco López Luro, V. Sundstedt","doi":"10.1145/3317956.3318153","DOIUrl":"https://doi.org/10.1145/3317956.3318153","url":null,"abstract":"Aiming is key for virtual reality (VR) interaction, and it is often done using VR controllers. Recent eye-tracking integrations in commercial VR head-mounted displays (HMDs) call for further research on usability and performance aspects to better determine possibilities and limitations. This paper presents a user study exploring gaze aiming in VR compared to a traditional controller in an \"aim and shoot\" task. Different speeds of targets and trajectories were studied. Qualitative data was gathered using the system usability scale (SUS) and cognitive load (NASA TLX) questionnaires. Results show a lower perceived cognitive load using gaze aiming and on par usability scale. Gaze aiming produced on par task duration but lower accuracy on most conditions. Lastly, the trajectory of the target significantly affected the orientation of the HMD in relation to the target's location. The results show potential using gaze aiming in VR and motivate further research.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116310047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 34
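For reference, the SUS scores reported in such studies follow Brooke's standard scoring scheme: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5 onto a 0-100 range. A small sketch with made-up responses:

```python
# Standard SUS scoring (Brooke 1996); responses are ten 1-5 Likert answers, item 1 first.
def sus_score(responses):
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # 0-based even index = odd item
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))     # -> 80.0
```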
EyeFlow
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3319820
Almoctar Hassoumi, Vsevolod Peysakhovich, Christophe Hurter
{"title":"EyeFlow","authors":"Almoctar Hassoumi, Vsevolod Peysakhovich, Christophe Hurter","doi":"10.1145/3314111.3319820","DOIUrl":"https://doi.org/10.1145/3314111.3319820","url":null,"abstract":"We investigate the smooth pursuit eye movement based interaction using an unmodified off-the-shelf RGB camera. In each pair of sequential video frames, we compute the indicative direction of the eye movement by analyzing flow vectors obtained using the Lucas-Kanade optical flow algorithm. We discuss how carefully selected low vectors could replace the traditional pupil centers detection in smooth pursuit interaction. We examine implications of unused features in the eye camera imaging frame as potential elements for detecting gaze gestures. This simple approach is easy to implement and abstains from many of the complexities of pupil based approaches. In particular, EyeFlow does not call for either a 3D pupil model or 2D pupil detection to track the pupil center location. We compare this method to state-of-the-art approaches and ind that this can enable pursuit interactions with standard cameras. Results from the evaluation with 12 users data yield an accuracy that compares to previous studies. In addition, the benefit of this work is that the approach does not necessitate highly matured computer vision algorithms and expensive IR-pass cameras.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127991548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
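The core idea, estimating an indicative eye-movement direction from Lucas-Kanade flow vectors between consecutive eye-camera frames, can be sketched with OpenCV as below. The file name is hypothetical and the point selection is generic; this illustrates the idea rather than reproducing the authors' implementation:

```python
# Estimate a mean optical-flow direction across consecutive eye-camera frames.
import cv2

cap = cv2.VideoCapture("eye_camera.mp4")        # hypothetical eye-camera recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=5)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if good.any():
        flow = (nxt[good] - pts[good]).reshape(-1, 2)
        print("indicative direction:", flow.mean(axis=0))   # mean flow vector
    prev_gray, pts = gray, nxt[good].reshape(-1, 1, 2)
cap.release()
```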
Reducing calibration drift in mobile eye trackers by exploiting mobile phone usage
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3319918
P. Müller, Daniel Buschek, Michael Xuelin Huang, A. Bulling
{"title":"Reducing calibration drift in mobile eye trackers by exploiting mobile phone usage","authors":"P. Müller, Daniel Buschek, Michael Xuelin Huang, A. Bulling","doi":"10.1145/3314111.3319918","DOIUrl":"https://doi.org/10.1145/3314111.3319918","url":null,"abstract":"Automatic saliency-based recalibration is promising for addressing calibration drift in mobile eye trackers but existing bottom-up saliency methods neglect user's goal-directed visual attention in natural behaviour. By inspecting real-life recordings of egocentric eye tracker cameras, we reveal that users are likely to look at their phones once these appear in view. We propose two novel automatic recalibration methods that exploit mobile phone usage: The first builds saliency maps using the phone location in the egocentric view to identify likely gaze locations. The second uses the occurrence of touch events to recalibrate the eye tracker, thereby enabling privacy-preserving recalibration. Through in-depth evaluations on a recent mobile eye tracking dataset (N=17, 65 hours) we show that our approaches outperform a state-of-the-art saliency approach for automatic recalibration. As such, our approach improves mobile eye tracking and gaze-based interaction, particularly for long-term use.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133782109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
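The touch-event variant can be conveyed with a deliberately simplified model: if drift is approximated as a constant offset, gaze samples recorded at the moments of touch events give correspondence pairs from which that offset can be estimated. The paper's actual recalibration is more involved; this sketch only illustrates the idea:

```python
# Estimate a constant drift offset from (gaze-at-touch, touch-location) pairs.
import numpy as np

gaze_at_touch = np.array([[512., 300.], [200., 641.], [788., 454.]])  # tracker output
touch_points  = np.array([[540., 330.], [231., 668.], [815., 480.]])  # ground truth

offset = np.median(touch_points - gaze_at_touch, axis=0)   # robust to outlier touches

def recalibrate(gaze_xy):
    return gaze_xy + offset

print(recalibrate(np.array([400., 400.])))
```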
Quantifying and understanding the differences in visual activities with contrast subsequences
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3319842
Yu Li, Carla M. Allen, C. Shyu
{"title":"Quantifying and understanding the differences in visual activities with contrast subsequences","authors":"Yu Li, Carla M. Allen, C. Shyu","doi":"10.1145/3314111.3319842","DOIUrl":"https://doi.org/10.1145/3314111.3319842","url":null,"abstract":"Understanding differences and similarities between scanpaths has been one of the primary goals for eye tracking research. Sequences of areas of interest mapped from fixations are a major focus for many analytic techniques since these sequences directly relate to the semantic meaning of the visual input. Many studies analyze complete sequences while overlooking the micro-transitions in subsequences. In this paper, we propose a method which extracts subsequences as features and finds contrasting patterns between different viewer groups. The contrast patterns help domain experts to quantify variations between visual activities and understand reasoning processes for complex visual tasks. Experiments were conducted with 39 expert and novice radiographers using nine radiology images corresponding to nine levels of task complexity. Identified contrast patterns, validated by an expert, prove that the method effectively reveals visual reasoning processes that are otherwise hidden.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130982586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
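A toy sketch of subsequence-based contrast mining: count AOI n-grams per viewer group and rank patterns by the difference in relative frequency between groups. The AOI names and the scoring rule are illustrative assumptions, not the paper's method in detail:

```python
# Rank AOI bigrams by how strongly they separate expert from novice scanpaths.
from collections import Counter

def ngrams(seq, n=2):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def relative_freq(group, n=2):
    counts = Counter(g for seq in group for g in ngrams(seq, n))
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

experts = [["lung", "heart", "lung", "spine"], ["heart", "lung", "spine", "lung"]]
novices = [["lung", "lung", "heart", "lung"], ["spine", "lung", "lung", "heart"]]

fe, fn = relative_freq(experts), relative_freq(novices)
contrast = {k: fe.get(k, 0) - fn.get(k, 0) for k in set(fe) | set(fn)}
for pattern, score in sorted(contrast.items(), key=lambda kv: -abs(kv[1]))[:3]:
    print(pattern, round(score, 2))
```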
Attentional orienting in real and virtual 360-degree environments: applications to aeronautics
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3322871
Rébaï Soret, C. Hurter, Vsevolod Peysakhovich
{"title":"Attentional orienting in real and virtual 360-degree environments: applications to aeronautics","authors":"Rébaï Soret, C. Hurter, Vsevolod Peysakhovich","doi":"10.1145/3314111.3322871","DOIUrl":"https://doi.org/10.1145/3314111.3322871","url":null,"abstract":"We investigate the mechanisms of attentional orienting in a 360-degree virtual environments. Through the use of Posner's paradigm, we study the effects of different attentional guidance techniques designed to improve information processing. The most efficient technique will be applied to a procedure learning tool in virtual reality and a remote air traffic control tower. The eye-tracker allows us to explore the differential effects of overt and covert orienting, to estimate the effectiveness of visual research and to use it as a technique for interaction in virtual reality.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116474077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
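The standard outcome measure in Posner's paradigm is the cue validity effect: mean reaction time on invalidly cued trials minus mean reaction time on validly cued trials. A minimal sketch with made-up numbers:

```python
# Cue validity effect = mean RT (invalid cue) - mean RT (valid cue).
from statistics import mean

valid_rts = [312, 298, 305, 290, 301]     # ms, target at cued location
invalid_rts = [355, 342, 360, 348, 351]   # ms, target at uncued location

print(f"cue validity effect: {mean(invalid_rts) - mean(valid_rts):.1f} ms")
```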
Reading detection in real-time
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3319916
Conor Kelton, Zijun Wei, Seoyoung Ahn, A. Balasubramanian, Samir R Das, D. Samaras, G. Zelinsky
{"title":"Reading detection in real-time","authors":"Conor Kelton, Zijun Wei, Seoyoung Ahn, A. Balasubramanian, Samir R Das, D. Samaras, G. Zelinsky","doi":"10.1145/3314111.3319916","DOIUrl":"https://doi.org/10.1145/3314111.3319916","url":null,"abstract":"Observable reading behavior, the act of moving the eyes over lines of text, is highly stereotyped among the users of a language, and this has led to the development of reading detectors-methods that input windows of sequential fixations and output predictions of the fixation behavior during those windows being reading or skimming. The present study introduces a new method for reading detection using Region Ranking SVM (RRSVM). An SVM-based classifier learns the local oculomotor features that are important for real-time reading detection while it is optimizing for the global reading/skimming classification, making it unnecessary to hand-label local fixation windows for model training. This RRSVM reading detector was trained and evaluated using eye movement data collected in a laboratory context, where participants viewed modified web news articles and had to either read them carefully for comprehension or skim them quickly for the selection of keywords (separate groups). Ground truth labels were known at the global level (the instructed reading or skimming task), and obtained at the local level in a separate rating task. The RRSVM reading detector accurately predicted 82.5% of the global (article-level) reading/skimming behavior, with accuracy in predicting local window labels ranging from 72-95%, depending on how tuned the RRSVM was for local and global weights. With this RRSVM reading detector, a method now exists for near real-time reading detection without the need for hand-labeling of local fixation windows. With real-time reading detection capability comes the potential for applications ranging from education and training to intelligent interfaces that learn what a user is likely to know based on previous detection of their reading behavior.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126362634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
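Window-level oculomotor features of the kind such a detector consumes can be sketched as follows; the specific feature choices are illustrative, not the paper's RRSVM feature set:

```python
# Compute simple reading-related features over one window of fixations.
import numpy as np

def window_features(fixations):
    """fixations: sequence of (x, y, duration_ms) within one window."""
    f = np.asarray(fixations, dtype=float)
    dx, dy = np.diff(f[:, 0]), np.diff(f[:, 1])
    return {
        "mean_fix_dur": f[:, 2].mean(),
        "forward_saccade_ratio": (dx > 0).mean(),   # left-to-right progression
        "regression_ratio": (dx < 0).mean(),        # regressions / return sweeps
        "mean_vertical_drift": np.abs(dy).mean(),   # line changes vs. drift
    }

window = [(100, 200, 220), (160, 202, 240), (225, 199, 210), (90, 230, 260)]
print(window_features(window))
```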
The vision and interpretation of paintings: bottom-up visual processes, top-down culturally informed attention, and aesthetic experience
Pub Date: 2019-06-25, DOI: 10.1145/3314111.3322870
Pablo Fontoura, Jeannette Schaeffer, M. Menu
{"title":"The vision and interpretation of paintings: bottom-up visual processes, top-down culturally informed attention, and aesthetic experience","authors":"Pablo Fontoura, Jeannette Schaeffer, M. Menu","doi":"10.1145/3314111.3322870","DOIUrl":"https://doi.org/10.1145/3314111.3322870","url":null,"abstract":"This PhD thesis aims to contribute to our knowledge about how we experience paintings and more specifically, about how visual exploration, cognitive categorization and emotive evaluation contribute to the aesthetic dimension. [Schaeffer 2015; Leder et al. 2004] of our experience of paintings. [Molnar 1981; Gombrich 1960; Bandaxall 1986; Bandaxall 1982; Bandaxall 1984] To this purpose, we use eye-tracking technology at Musée Unterlinden to record the vision of 52 participants looking at the Isenheim altarpiece before and after restoration. The first results before restoration allowed us to identify and classify the zones of visual salience as well as the effects of participants' backgrounds and emotions on fixation time and visual attention to different areas of interest. This analysis will be further compared with data collected in a similar study after restoration.","PeriodicalId":161901,"journal":{"name":"Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116825241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6