Latest Publications: 2022 Symposium on Eye Tracking Research and Applications

Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3531165
Authors: Nils Rodrigues, Lin Shao, Jiazhen Yan, T. Schreck, D. Weiskopf
Abstract: We propose a three-step concept and visual design for supporting the visual exploration of high-dimensional data in scatterplots through eye-tracking. First, we extract subsets in the underlying data using existing classifications, automated clustering algorithms, or eye-tracking. For the latter, we map gaze to the underlying data dimensions in the scatterplot. Clusters of data points that have been the focus of the viewers' gaze are marked as clusters of interest (eye-mind hypothesis). In the second step, our concept extracts various properties from statistics and scagnostics of the clusters. The third step uses these measures to compare the current data clusters from the main scatterplot to the same data in other dimensions. The results enable analysts to retrieve similar or dissimilar views as guidance to explore the entire data set. We provide a proof-of-concept implementation as a test bench and describe a use case to show a practical application and initial results.
Citations: 1
For Your Eyes Only: Privacy-preserving eye-tracking datasets
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529618
Authors: Brendan David-John, Kevin R. B. Butler, Eakta Jain
Abstract: Eye-tracking is a critical source of information for understanding human behavior and developing future mixed-reality technology. Eye-tracking enables applications that classify user activity or predict user intent. However, eye-tracking datasets collected during common virtual reality tasks have also been shown to enable unique user identification, which creates a privacy risk. In this paper, we focus on the problem of user re-identification from eye-tracking features. We adapt standardized privacy definitions of k-anonymity and plausible deniability to protect datasets of eye-tracking features, and evaluate performance against re-identification by a standard biometric identification model on seven VR datasets. Our results demonstrate that re-identification goes down to chance levels for the privatized datasets, even as utility is preserved to levels higher than 72% accuracy in document type classification.
Citations: 9
A gaze-based study design to explore how competency evolves during a photo manipulation task
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3531634
Authors: Nora Castner, Béla Umlauf, Ard Kastrati, M. Płomecka, William Schaefer, Enkelejda Kasneci, Z. Bylinskii
ACM Reference Format: Nora Castner, Béla Umlauf, Ard Kastrati, Martyna Plomecka, William Schaefer, Enkelejda Kasneci, and Zoya Bylinskii. 2022. A gaze-based study design to explore how competency evolves during a photo manipulation task. In Symposium on Eye Tracking Research and Applications (ETRA '22 Technical Abstracts), June 8–11, 2022, Seattle, Washington. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3379155.3391320
Citations: 1
Introducing a Real-Time Advanced Eye Movements Analysis Pipeline
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3532196
Authors: Gavindya Jayawardena
Abstract: The Real-Time Advanced Eye Movements Analysis Pipeline (RAEMAP) is an advanced pipeline to analyze traditional positional gaze measurements as well as advanced eye gaze measurements. The proposed implementation of RAEMAP includes real-time analysis of fixations, saccades, gaze transition entropy, and the low/high index of pupillary activity. RAEMAP will also provide visualizations of fixations, fixations on AOIs, heatmaps, and dynamic AOI generation in real time. This paper outlines the proposed architecture of RAEMAP.
Citations: 1
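The real-time fixation and saccade analysis described above is commonly built on a velocity-threshold classifier (I-VT). The sketch below is an illustration of that general technique, not RAEMAP's actual implementation; the 100°/s threshold and the `(time, x, y)` sample format are assumptions.

```python
import math

def classify_ivt(samples, velocity_threshold=100.0):
    """Label each inter-sample segment as fixation or saccade.

    samples: list of (t_seconds, x_deg, y_deg) gaze samples.
    Returns one label per consecutive pair of samples.
    """
    labels = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        # Angular velocity in degrees per second for this segment.
        velocity = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        labels.append("saccade" if velocity > velocity_threshold else "fixation")
    return labels

# A slow drift, a large jump, then another slow drift (100 Hz samples).
samples = [(0.00, 0.0, 0.0), (0.01, 0.1, 0.0), (0.02, 5.0, 0.0), (0.03, 5.1, 0.0)]
print(classify_ivt(samples))  # → ['fixation', 'saccade', 'fixation']
```

Real-time pipelines apply the same logic to a sliding window of incoming samples, usually followed by merging of short adjacent fixations.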
Game Audio Impacts on Players' Visual Attention, Model Performance for Cloud Gaming
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529621
Authors: Morva Saaty, M. Hashemi
Abstract: Cloud gaming (CG) is a new approach to deliver a high-quality gaming experience to gamers anywhere, anytime, and on any device. To achieve this goal, CG requires high bandwidth, which is still a major challenge. Much existing research has focused on modeling or predicting players' Visual Attention Maps (VAMs) and allocating bitrate accordingly. Although studies indicate that both audio and video modalities influence human perception, few studies have considered audio in cloud-based attention models. This paper demonstrates that audio features in video games change players' VAMs in various game scenarios. Our findings indicate that incorporating game audio improves the accuracy of the predicted attention maps by 13% on average compared to previous VAMs generated from visual saliency by the Game Attention Model for CG. The audio impact is more evident in video games with fewer visual components or indicators on the screen.
Citations: 1
A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529616
Authors: Johannes Meyer, Tobias Wilm, Reinhold Fiess, T. Schlebusch, Wilhelm Stork, Enkelejda Kasneci
Abstract: Eye-tracking is a key technology for future retinal-projection-based AR glasses, as it enables techniques such as foveated rendering or gaze-driven exit pupil steering, both of which increase the system's overall performance. However, two major challenges for video-oculography systems are robust gaze estimation in the presence of glasses slippage and the need for frequent sensor calibration. To overcome these challenges, we propose a novel, calibration-free eye-tracking sensor for AR glasses based on a highly transparent holographic optical element (HOE) and a laser scanner. We fabricate a segmented HOE generating two stereo images of the eye region. A single-pixel detector in combination with our stereo reconstruction algorithm is used to precisely calculate the gaze position. In our laboratory setup, our eye-tracking sensor achieves a calibration-free accuracy of 1.35°, highlighting its suitability for consumer AR glasses.
Citations: 1
Geometry-Aware Eye Image-To-Image Translation
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3532524
Authors: Conny Lu, Qian Zhang, K. Krishnakumar, Jixu Chen, H. Fuchs, S. Talathi, Kunlin Liu
Abstract: Recently, image-to-image translation (I2I) has met with great success in computer vision, but few works have paid attention to the geometric changes that occur during translation. Geometric changes are necessary to reduce the geometric gap between domains, at the cost of breaking correspondence between translated images and the original ground truth. We propose a novel geometry-aware semi-supervised method to preserve this correspondence while still allowing geometric changes. The proposed method takes a synthetic image-mask pair as input and produces a corresponding real pair. We also utilize an objective function to ensure consistent geometric movement of the image and mask through the translation. Extensive experiments illustrate that our method yields an 11.23% higher mean Intersection-over-Union than current methods on the downstream eye segmentation task. The generated images show a 15.9% decrease in Fréchet Inception Distance, indicating higher image quality.
Citations: 1
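The mean Intersection-over-Union reported for the downstream segmentation task averages a simple per-mask ratio. A minimal sketch for binary masks follows; the flat-list representation and the empty-union convention are illustrative assumptions, not the paper's implementation.

```python
def binary_iou(pred, target):
    """Intersection-over-Union for two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, target) if p == 1 or t == 1)
    # Convention: two empty masks agree perfectly.
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over a list of (pred, target) mask pairs."""
    return sum(binary_iou(p, t) for p, t in pairs) / len(pairs)

pred, target = [1, 1, 0, 0], [1, 0, 1, 0]
print(binary_iou(pred, target))  # → 0.3333333333333333 (1 overlap / 3 in union)
```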
Instant messaging multitasking while reading: a pilot eye-tracking study
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529237
Authors: L. Altamura, L. Salmerón, Yvonne Kammerer
Abstract: This pilot study analyzes the reading patterns of 15 German students while they receive instant messages through a smartphone, imitating an online conversation. With this pilot study, we aim to test the eye-tracking setup and methodology, specifically analyzing the moment at which participants return to reading after answering the instant messages. We explore the relationships with reading comprehension performance and differences across readers, considering individual differences in reading habits and multitasking behavior.
Citations: 0
LSTMs can distinguish dental expert saccade behavior with high "plaque-urracy"
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529631
Authors: Nora Castner, Jonas Frankemölle, C. Keutel, F. Huettig, Enkelejda Kasneci
Abstract: Much of the current expertise literature has found that domain-specific tasks evoke different eye movements. However, research has yet to predict optimal image exploration using saccadic information and to identify and quantify differences in search strategies between learners, intermediates, and expert practitioners. By employing LSTMs for scanpath classification, we found that saccade features over time could distinguish all groups with high accuracy. The most distinguishing features were saccade velocity peak (72%), length (70%), and velocity average (68%). These findings support the holistic theory of expert visual exploration: experts can quickly process the whole scene using longer and more rapid initial saccade behavior. The potential to integrate expertise model development from saccadic scanpath features into intelligent tutoring systems is the ultimate inspiration for our research. Additionally, this model is not confined to visual exploration in dental X-rays; it can extend to other medical domains.
Citations: 4
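The per-saccade features named above — length (amplitude), peak velocity, and average velocity — can be derived directly from raw gaze samples before being fed to a sequence classifier. A minimal sketch under assumed units (degrees of visual angle, seconds); this illustrates the feature computation only, not the authors' pipeline.

```python
import math

def saccade_features(samples):
    """Amplitude (deg), peak velocity, and mean velocity (deg/s) of one saccade.

    samples: list of (t_seconds, x_deg, y_deg) covering a single saccade.
    """
    velocities = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        velocities.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    # Amplitude: straight-line distance from saccade start to end point.
    amplitude = math.hypot(samples[-1][1] - samples[0][1],
                           samples[-1][2] - samples[0][2])
    return {"amplitude": amplitude,
            "peak_velocity": max(velocities),
            "mean_velocity": sum(velocities) / len(velocities)}

feats = saccade_features([(0.0, 0.0, 0.0), (0.01, 1.0, 0.0), (0.02, 3.0, 0.0)])
print(feats)  # amplitude 3.0, peak_velocity 200.0, mean_velocity 150.0
```

A scanpath is then a time-ordered sequence of such feature vectors, which is exactly the variable-length input an LSTM classifier consumes.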
On the Use of Distribution-based Metrics for the Evaluation of Drivers' Fixation Maps Against Spatial Baselines
Pub Date: 2022-06-08 · DOI: 10.1145/3517031.3529629
Authors: Jaime Maldonado, Lino Antoni Giefer
Abstract: A distinctive characteristic of human driver behavior is the spatial bias of gaze allocation toward the vanishing point of the road. This behavior can be evaluated by comparing fixation maps against a spatial-bias baseline using typical metrics such as Pearson's Correlation Coefficient (CC) and the Kullback-Leibler divergence (KL). CC and KL penalize false positives and negatives differently, which implies that they can be affected by the characteristics of the baseline. In this paper, we analyze the use of CC and KL for the evaluation of drivers' fixation maps against two types of spatial-bias baselines: baselines obtained from recorded fixation maps (data-based) and 2D-Gaussian baselines (function-based). Our results indicate that the use of CC can lead to misleading interpretations due to single fixations outside of the spatial-bias area when compared to data-based baselines. Thus, we argue that KL and CC should be considered simultaneously under specific modeling assumptions.
Citations: 1