Latest Publications: ACM Symposium on Eye Tracking Research and Applications

Analysis of iris obfuscation: Generalising eye information processes for privacy studies in eye tracking.
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448017.3457385
Anton Molbjerg Eskildsen, D. Hansen
{"title":"Analysis of iris obfuscation: Generalising eye information processes for privacy studies in eye tracking.","authors":"Anton Molbjerg Eskildsen, D. Hansen","doi":"10.1145/3448017.3457385","DOIUrl":"https://doi.org/10.1145/3448017.3457385","url":null,"abstract":"We present a framework to model and evaluate obfuscation methods for removing sensitive information in eye-tracking. The focus is on preventing iris-pattern identification. Candidate methods have to be effective at removing information while retaining high utility for gaze estimation. We propose several obfuscation methods that drastically outperform existing ones. A stochastic grid-search is used to determine optimal method parameters and evaluate the model framework. Precise obfuscation and gaze effects are measured for selected parameters. Two attack scenarios are considered and evaluated. We show that large datasets are susceptible to probabilistic attacks, even with seemingly effective obfuscation methods. However, additional data is needed to more accurately access the probabilistic security.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131234978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
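The stochastic grid-search the abstract mentions can be pictured with a minimal sketch: sample obfuscation parameters at random and score each candidate by a privacy term plus a utility term. Everything below (the Gaussian-blur obfuscation, both toy cost functions, and the synthetic image) is a hypothetical stand-in, not the authors' framework.

```python
import random
import numpy as np
from scipy.ndimage import gaussian_filter

def obfuscate(eye_image, sigma):
    """Candidate obfuscation: Gaussian blur of strength sigma."""
    return gaussian_filter(eye_image, sigma)

def privacy_cost(original, blurred):
    # Toy stand-in for iris identifiability: pixel correlation between the
    # original and obfuscated images (high correlation = pattern survives).
    return abs(float(np.corrcoef(original.ravel(), blurred.ravel())[0, 1]))

def utility_cost(original, blurred):
    # Toy stand-in for gaze-estimation degradation: displacement of the
    # image intensity centroid, a crude proxy for pupil-feature stability.
    def centroid(img):
        ys, xs = np.indices(img.shape)
        w = img / img.sum()
        return np.array([(ys * w).sum(), (xs * w).sum()])
    return float(np.linalg.norm(centroid(original) - centroid(blurred)))

rng = np.random.default_rng(0)
eye = rng.random((64, 64))  # synthetic "eye image" for the sketch

# Stochastic grid-search: sample parameter values instead of an exhaustive sweep.
candidates = [random.uniform(0.5, 10.0) for _ in range(30)]
best_sigma = min(
    candidates,
    key=lambda s: privacy_cost(eye, obfuscate(eye, s))
                  + utility_cost(eye, obfuscate(eye, s)),
)
print(f"best sigma under the toy combined cost: {best_sigma:.2f}")
```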
Determining Differences in Reading Behavior Between Experts and Novices by Investigating Eye Movement on Source Code Constructs During a Bug Fixing Task
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3457424
Salwa Aljehane, Bonita Sharif, Jonathan I. Maletic
{"title":"Determining Differences in Reading Behavior Between Experts and Novices by Investigating Eye Movement on Source Code Constructs During a Bug Fixing Task","authors":"Salwa Aljehane, Bonita Sharif, Jonathan I. Maletic","doi":"10.1145/3448018.3457424","DOIUrl":"https://doi.org/10.1145/3448018.3457424","url":null,"abstract":"This research compares the eye movement of expert and novice programmers working on a bug fixing task. This comparison aims at investigating which source code elements programmers focus on when they review Java source code. Programmer code reading behaviors at the line and term levels are used to characterize the differences between experts and novices. The study analyzes programmers’ eye movements over identified source code areas using an existing eye tracking dataset of 12 experts and 10 novices. The results show that the difference between experts and novices is significant in source code element coverage. Specifically, novices read more method signatures, variable declarations, identifiers, and keywords compared to experts. However, experts are better at finishing the task using fewer source code elements when compared to novices. Moreover, programmers tend to focus on the method signatures the most while reading the code.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126951354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
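As a minimal sketch of the "element coverage" measure described above, the snippet below counts, per participant, how many source code element categories received at least one fixation. The data layout is hypothetical and does not reflect the published dataset's schema.

```python
from collections import defaultdict

# (participant, group, fixated source-code element category)
fixations = [
    ("p01", "expert", "method_signature"),
    ("p01", "expert", "identifier"),
    ("p02", "novice", "method_signature"),
    ("p02", "novice", "variable_declaration"),
    ("p02", "novice", "keyword"),
]

coverage = defaultdict(set)
groups = {}
for pid, group, element in fixations:
    coverage[pid].add(element)  # a category counts once it gets any fixation
    groups[pid] = group

for pid, elements in sorted(coverage.items()):
    print(f"{pid} ({groups[pid]}): covered {len(elements)} element categories")
```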
Repetition effects in task-driven eye movement analyses after longer time-spans
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3458005
T. Berger, Michael Raschke
{"title":"Repetition effects in task-driven eye movement analyses after longer time-spans","authors":"T. Berger, Michael Raschke","doi":"10.1145/3448018.3458005","DOIUrl":"https://doi.org/10.1145/3448018.3458005","url":null,"abstract":"Visualizations are an important tool in risk management to support decision-making of recipients of risk reports. Many trainings aim at helping managers to better understand how to read such visualizations. In this paper we present first results of an ongoing large study on the effect of repeated presentation of risk visualizations from annual reports. This is of importance to find out if such repetitions have an effect on accuracy and the behavior of readers. Contrary to other studies we had longer time-spans of months and weeks between two trials. We found that fixation durations are different after second presentation and that the number of fixations generally are lower. We also analyzed scan paths, indicating that regions that are more semantically meaningful are more often in the center of attention. We call for more studies with longer time spans between two trials as we found some interesting patterns.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121163673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
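A minimal sketch of the kind of first-versus-second-presentation comparison such a study involves: a paired test of fixation counts for the same participants across the two trials. The numbers are synthetic and the choice of test is illustrative, not taken from the paper.

```python
from scipy import stats

# Fixation counts for the same six (synthetic) participants on the first
# presentation and on the repetition months later.
first  = [142, 155, 160, 138, 171, 149]
second = [118, 140, 151, 120, 150, 133]

t, p = stats.ttest_rel(first, second)  # paired test: same people, two trials
print(f"paired t = {t:.2f}, p = {p:.3f}")
```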
Gaze+Lip: Rapid, Precise and Expressive Interactions Combining Gaze Input and Silent Speech Commands for Hands-free Smart TV Control
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3458011
Zixiong Su, Xinlei Zhang, N. Kimura, J. Rekimoto
{"title":"Gaze+Lip: Rapid, Precise and Expressive Interactions Combining Gaze Input and Silent Speech Commands for Hands-free Smart TV Control","authors":"Zixiong Su, Xinlei Zhang, N. Kimura, J. Rekimoto","doi":"10.1145/3448018.3458011","DOIUrl":"https://doi.org/10.1145/3448018.3458011","url":null,"abstract":"As eye-tracking technologies develop, gaze becomes more and more popular as an input modality. However, in situations that require fast and precise object selection, gaze is hard to use because of limited accuracy. We present Gaze+Lip, a hands-free interface that combines gaze and lip reading to enable rapid and precise remote controls when interacting with big displays. Gaze+Lip takes advantage of gaze for target selection and leverages silent speech to ensure accurate and reliable command execution in noisy scenarios such as watching TV or playing videos on a computer. For evaluation, we implemented a system on a TV, and conducted an experiment to compare our method with the dwell-based gaze-only input method. Results showed that Gaze+Lip outperformed the gaze-only approach in accuracy and input speed. Furthermore, subjective evaluations indicated that Gaze+Lip is easy to understand, easy to use, and has higher perceived speed than the gaze-only approach.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114445929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
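The division of labor the abstract describes (gaze for target selection, silent speech for command execution) can be sketched as a simple fusion rule: when a lip-read command arrives, apply it to whatever lies under the current gaze point. The targets and recognizer output below are hypothetical stand-ins, not the authors' system.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, gx: int, gy: int) -> bool:
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

targets = [Target("play", 0, 0, 100, 50), Target("volume_up", 120, 0, 100, 50)]

def on_lip_command(command: str, gaze_xy: tuple[int, int]) -> None:
    """Execute a silently-spoken command on whatever is under gaze."""
    hit = next((t for t in targets if t.contains(*gaze_xy)), None)
    if hit is None:
        return  # no target under gaze: ignore the command
    print(f"executing '{command}' on '{hit.name}'")

on_lip_command("select", (30, 20))  # -> executing 'select' on 'play'
```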
Effects of measurement time and presentation size conditions on biometric identification using eye movements
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3458616
Yudai Niitsu, M. Nakayama
{"title":"Effects of measurement time and presentation size conditions on biometric identification using eye movements","authors":"Yudai Niitsu, M. Nakayama","doi":"10.1145/3448018.3458616","DOIUrl":"https://doi.org/10.1145/3448018.3458616","url":null,"abstract":"Biometric identification using eye movements is an identification method with low risk of spoofing, however the problem with it is that the eye movement measurement time is long. In this paper, we studied pattern lock authentication using eye movement features. As a result of 1-to-N identification using the data of six subjects, it was found that the identification rate was maximized at a measurement time of 3 seconds, indicating that it was possible to identify individuals in a short measurement time. In addition, we examined the effects of the data measurement time conditions and the presentation size on the rate of identification. The condition which maximized the identification rate was a measurement time limit of 3 seconds or the presentation of a stimulus pattern using a visual angle of 27.20°. Furthermore, the Mel-Frequency Cepstral Coefficient (MFCC) of the viewpoint coordinates and the diameter of the pupil were the features that contributed most to identification.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122665200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
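The MFCC features mentioned above can be computed by treating a gaze-coordinate trace as an ordinary 1-D signal. The sketch below does this for a synthetic horizontal-gaze trace; the sampling rate, frame sizes, and filter settings are assumptions, not the paper's configuration.

```python
import numpy as np
import librosa

sr = 250  # assumed eye-tracker sampling rate in Hz
t = np.arange(0, 3.0, 1 / sr)  # a 3-second recording, matching the abstract
gaze_x = np.sin(2 * np.pi * 1.3 * t).astype(np.float32)  # synthetic trace

# librosa's MFCC operates on any 1-D signal, not just audio; frame sizes are
# chosen here to suit the low sampling rate and are purely illustrative.
mfcc = librosa.feature.mfcc(y=gaze_x, sr=sr, n_mfcc=13,
                            n_fft=64, hop_length=32, n_mels=20)
features = mfcc.mean(axis=1)  # one fixed-length feature vector per recording
print(features.shape)  # (13,)
```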
Using Deep Learning to Classify Saccade Direction from Brain Activity
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3458014
Ard Kastrati, M. Płomecka, Roger Wattenhofer, N. Langer
{"title":"Using Deep Learning to Classify Saccade Direction from Brain Activity","authors":"Ard Kastrati, M. Płomecka, Roger Wattenhofer, N. Langer","doi":"10.1145/3448018.3458014","DOIUrl":"https://doi.org/10.1145/3448018.3458014","url":null,"abstract":"We present first insights into our project that aims to develop an Electroencephalography (EEG) based Eye-Tracker. Our approach is tested and validated on a large dataset of simultaneously recorded EEG and infrared video-based Eye-Tracking, serving as ground truth. We compared several state-of-the-art neural network architectures for time series classification: InceptionTime, EEGNet, and investigated other architectures such as convolutional neural networks (CNN) with Xception modules and Pyramidal CNN. We prepared and tested these architectures with our rich dataset and obtained a remarkable accuracy of the left/right saccades direction classification (94.8 %) for the InceptionTime network, after hyperparameter tuning.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116575401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
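As a rough illustration of the task, the sketch below defines a small 1-D CNN that maps multi-channel EEG windows to left/right logits. It is far simpler than the InceptionTime and EEGNet architectures the paper compares; the channel count and window length are assumptions.

```python
import torch
import torch.nn as nn

class SaccadeCNN(nn.Module):
    def __init__(self, n_channels: int = 64, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global average over time
            nn.Flatten(),
            nn.Linear(32, n_classes),  # logits: left vs. right saccade
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, eeg_channels, time_samples)
        return self.net(x)

model = SaccadeCNN()
eeg = torch.randn(8, 64, 500)  # 8 windows of 64-channel, 500-sample EEG
print(model(eeg).shape)  # torch.Size([8, 2])
```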
Eye, Robot: Calibration Challenges and Potential Solutions for Wearable Eye Tracking in Individuals with Eccentric Fixation
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3450341.3458489
Kassia Love, A. Velisar, N. Shanidze
{"title":"Eye, Robot: Calibration Challenges and Potential Solutions for Wearable Eye Tracking in Individuals with Eccentric Fixation","authors":"Kassia Love, A. Velisar, N. Shanidze","doi":"10.1145/3450341.3458489","DOIUrl":"https://doi.org/10.1145/3450341.3458489","url":null,"abstract":"Loss of the central retina, including the fovea, can lead to a loss of visual acuity and oculomotor deficits, and thus have profound effects on day-to-day tasks. Recent advances in head-mounted, 3D eye tracking have allowed researchers to extend studies in this population to a broader set of daily tasks and more naturalistic behaviors and settings. However, decreases in fixational stability, multiple fixational loci and their uncertain role as oculomotor references, as well as eccentric fixation all provide additional challenges for calibration and collection of eye movement data. Here we quantify reductions in calibration accuracy relative to fixation eccentricity, and suggest a robotic calibration and validation tool that will allow for future developments of calibration and tracking algorithms designed with this population in mind.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132543977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
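The calibration accuracy being quantified is typically expressed as the angular error between the estimated gaze direction and the true target direction. The sketch below computes that error for targets at increasing eccentricity; the peripheral degradation model is fictitious and only illustrates the kind of trend the abstract describes.

```python
import numpy as np

def angular_error_deg(gaze_dir: np.ndarray, target_dir: np.ndarray) -> float:
    """Angle between the estimated gaze and the true target direction."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    t = target_dir / np.linalg.norm(target_dir)
    return float(np.degrees(np.arccos(np.clip(g @ t, -1.0, 1.0))))

# Toy validation sweep: targets at increasing eccentricity from straight
# ahead (+z), with a fictitious gaze estimate that degrades peripherally.
for ecc in (0.0, 5.0, 10.0, 20.0):
    theta = np.radians(ecc)
    target = np.array([np.sin(theta), 0.0, np.cos(theta)])
    noisy = target + np.array([0.002 * ecc, 0.0, 0.0])  # fictitious drift
    print(f"eccentricity {ecc:5.1f} deg -> error "
          f"{angular_error_deg(noisy, target):.2f} deg")
```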
Eye Tracking Analysis of Code Layout, Crowding and Dyslexia - An Open Data Set
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448018.3457420
I. McChesney, R. Bond
{"title":"Eye Tracking Analysis of Code Layout, Crowding and Dyslexia - An Open Data Set","authors":"I. McChesney, R. Bond","doi":"10.1145/3448018.3457420","DOIUrl":"https://doi.org/10.1145/3448018.3457420","url":null,"abstract":"Within computer science there is increasing recognition of the need for research data sets to be openly available to facilitate transparency and reproducibility of studies. In this short paper an open data set is described which contains the eye tracking recordings from an experiment in which programmers with and without dyslexia reviewed and described Java code. The aim of the experiment was to investigate if crowding in code layout affected the gaze behaviour and program comprehension of programmers with dyslexia. The data set provides data from 30 participants (14 dyslexia, 16 control) and their eye gaze behaviour in reviewing three small Java programs in various combinations of crowded and spaced configurations. The key features of the data set are described and observations made on the effect of alternative area of interest configurations. The paper concludes with some observations on enhancing access to data sets through metadata, data provenance and visualizations.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130143648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
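For readers who pick up the data set, an analysis along the lines of the experiment's question (does crowding affect gaze behavior?) might start like the sketch below. The column names, AOI labels, and values are hypothetical; consult the data set's own metadata for the actual schema.

```python
import pandas as pd

# Hypothetical schema: one fixation per row with participant group, layout
# condition, AOI label, and fixation duration.
fix = pd.DataFrame({
    "group":       ["dyslexia", "dyslexia", "control", "control"],
    "layout":      ["crowded", "spaced", "crowded", "spaced"],
    "aoi":         ["loop_body", "loop_body", "loop_body", "loop_body"],
    "duration_ms": [412, 298, 305, 287],
})

# Mean fixation duration per group and layout condition.
summary = fix.groupby(["group", "layout"])["duration_ms"].mean()
print(summary)
```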
Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448017.3457381
Argenis Ramirez Gomez, Christopher Clarke, Ludwig Sidenmark, Hans-Werner Gellersen
{"title":"Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye","authors":"Argenis Ramirez Gomez, Christopher Clarke, Ludwig Sidenmark, Hans-Werner Gellersen","doi":"10.1145/3448017.3457381","DOIUrl":"https://doi.org/10.1145/3448017.3457381","url":null,"abstract":"The eyes are coupled in their gaze function and therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold as an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability and user’s spontaneous choice of eye for modulation of input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability and workload. This is significant for the utility of Gaze+Hold as it affords flexibility for mapping of either eye in different configurations.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130259682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
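The Gaze+Hold principle (one eye's closure toggles the manipulation state while the open eye supplies continuous coordinates) can be sketched as a small per-frame state machine. The grab/release semantics below are an illustrative assumption, not the paper's exact set of techniques.

```python
def gaze_hold_update(left_open, right_open, left_gaze, right_gaze, state):
    """Per-frame update; `state` tracks whether a hold is in progress."""
    if left_open and right_open:
        if state.get("holding"):
            state["holding"] = False           # reopening the eye: release
            print("release at", state["cursor"])
        state["cursor"] = right_gaze           # plain two-eyed pointing
    elif left_open != right_open:              # exactly one eye closed
        cursor = left_gaze if left_open else right_gaze
        if not state.get("holding"):
            state["holding"] = True            # closing one eye: grab
            print("grab at", cursor)
        state["cursor"] = cursor               # the open eye keeps steering
    return state                               # both closed: blink, ignore

state: dict = {}
state = gaze_hold_update(True, True, (100, 100), (102, 101), state)
state = gaze_hold_update(True, False, (120, 110), None, state)       # grab
state = gaze_hold_update(True, False, (150, 140), None, state)       # drag
state = gaze_hold_update(True, True, (150, 141), (151, 139), state)  # release
```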
HGaze Typing: Head-Gesture Assisted Gaze Typing
ACM Symposium on Eye Tracking Research and Applications Pub Date: 2021-05-25 DOI: 10.1145/3448017.3457379
Wenxin Feng, Jiangnan Zou, Andrew T. N. Kurauchi, C. Morimoto, Margrit Betke
{"title":"HGaze Typing: Head-Gesture Assisted Gaze Typing","authors":"Wenxin Feng, Jiangnan Zou, Andrew T. N. Kurauchi, C. Morimoto, Margrit Betke","doi":"10.1145/3448017.3457379","DOIUrl":"https://doi.org/10.1145/3448017.3457379","url":null,"abstract":"This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study finds HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections.","PeriodicalId":226088,"journal":{"name":"ACM Symposium on Eye Tracking Research and Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114712535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
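A minimal sketch of the head-gesture command layer described above: a stub decoder proposes candidate words from the gaze path, and nod/shake/tilt gestures select, delete, or revise. The gesture-to-command mapping follows the abstract; the decoder and data structures are hypothetical.

```python
def decode_gaze_path(path) -> list[str]:
    """Stub for the gaze-path word decoder (not the authors' model)."""
    return ["hello", "hills", "hero"]  # ranked candidate words

def apply_gesture(gesture: str, candidates: list[str], text: list[str]):
    if gesture == "nod" and candidates:       # nod: accept top candidate
        text.append(candidates[0])
    elif gesture == "shake" and text:         # shake: delete last word
        text.pop()
    elif gesture == "tilt" and len(candidates) > 1:
        candidates.append(candidates.pop(0))  # tilt: cycle to next candidate
    return text

text: list[str] = []
cands = decode_gaze_path([(0.1, 0.2), (0.4, 0.2)])  # toy gaze path
text = apply_gesture("nod", cands, text)
print(text)  # ['hello']
```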