2022 Symposium on Eye Tracking Research and Applications: Latest Publications

Guiding Game Design Decisions via Eye-Tracking: An Indie Game Case Study
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529613
Borna Fatehi, C. Harteveld, Christoffer Holmgård
Abstract: Increasingly, videogame designers harness game usability techniques to inform design decisions and reduce costs, but advanced techniques are not yet commonplace—perhaps because their benefits go unrecognized, or due to a lack of expertise and facilities. In this paper, we present a case study that demonstrates the value of using eye-tracking to help guide the design process of a narrative game by an independent game studio. The studio's designers had defined four options for how text should be presented to players (Horizontal, Vertical, Subtitle, and Messenger) and were deadlocked on which to choose. As part of a collaboration with a university, a within-subjects eye-tracking study was conducted with 15 participants to evaluate the options. Combining the eye-tracking data with stated user preferences, the designers reached a consensus that the game benefits from strategies derived from messenger-style smartphone interaction design—later confirmed with a statistical analysis of fixation transitions between elements of interest on screen. The use of eye-tracking broke the deadlock and helped inform a final design decision, as demonstrated by a decision-making impact analysis that describes the design process. The paper ends with a reflection on the applicability of this academia-industry model to other contexts.
Citations: 0
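As an illustration of the fixation-transition analysis the abstract mentions, here is a minimal sketch of counting transitions between areas of interest (AOIs); the AOI labels and the input format are hypothetical, not taken from the paper.

```python
# Minimal sketch: count ordered AOI-to-AOI transitions in a fixation
# sequence. AOI names ("text", "avatar", "choices") are made up.
from collections import Counter

def transition_counts(aoi_sequence):
    """Count ordered transitions between distinct consecutive AOIs."""
    pairs = zip(aoi_sequence, aoi_sequence[1:])
    return Counter((a, b) for a, b in pairs if a != b)

# Example: one participant's fixations labeled by AOI for one layout.
fixations = ["text", "avatar", "text", "choices", "text", "choices"]
print(transition_counts(fixations))
```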
The Benefits of Depth Information for Head-Mounted Gaze Estimation
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529638
Stefan Stojanov, S. Talathi, Abhishek Sharma
Abstract: In this work, we investigate the hypothesis that adding 3D information about the periocular region to an end-to-end gaze-estimation network can improve gaze-estimation accuracy in the presence of slippage, which occurs quite commonly with head-mounted AR/VR devices. To this end, we use UnityEyes to generate a simulated dataset of RGB images and depth maps of the eye under varying camera placement, simulating slippage artifacts, and we generate different noise profiles for the depth maps to simulate depth-sensor noise. Using this data, we investigate the effects of different fusion techniques for combining image and depth information for gaze estimation. Our experiments show that under an attention-based fusion scheme, 3D information can significantly improve gaze estimation and compensates well for slippage-induced variability. Our findings support augmenting 2D cameras with depth sensors for the development of robust end-to-end appearance-based gaze-estimation systems.
Citations: 2
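The general shape of attention-based RGB-depth fusion can be sketched in a few lines of PyTorch; the encoder sizes, the two-weight softmax attention, and the yaw/pitch output below are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of attention-weighted fusion of eye-image and depth-map
# features for gaze estimation. All dimensions are illustrative.
import torch
import torch.nn as nn

class AttentionFusionGaze(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Separate encoders for the RGB eye image and its depth map.
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(16, feat_dim))
        self.depth_enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(16, feat_dim))
        # Attention weights decide how much each modality contributes.
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2),
                                  nn.Softmax(dim=-1))
        self.head = nn.Linear(feat_dim, 2)  # e.g. yaw and pitch angles

    def forward(self, rgb, depth):
        f_rgb, f_d = self.rgb_enc(rgb), self.depth_enc(depth)
        w = self.attn(torch.cat([f_rgb, f_d], dim=-1))
        fused = w[:, :1] * f_rgb + w[:, 1:] * f_d  # weighted sum of modalities
        return self.head(fused)

model = AttentionFusionGaze()
gaze = model(torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64))
print(gaze.shape)  # torch.Size([4, 2])
```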
“The more you explore, the less you remember”: unraveling the effects of scene clutter on learning and memory for targets
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529623
Christos Gkoumas, Andria Shimi
Abstract: We are constantly exposed to visually rich, oftentimes cluttered, environments. Previous studies have demonstrated the negative effects of clutter on visual search behavior and various oculomotor metrics. However, little is known about the consequences of clutter for other cognitive processes, such as learning and memory. In the present study, we explored the effects of scene clutter on gaze behavior during a learning task and whether these gaze patterns influenced memory performance in a later cued-recall task. Using spatial density analysis, we found that a higher degree of scene clutter resulted in more dispersed gaze behavior during the learning task. Additionally, participants recalled target locations less precisely in cluttered than in uncluttered scenes. These findings have important implications for theories linking exploratory viewing with memory performance, as well as for recommendations on how interior spaces could be better organized to facilitate daily living.
Citations: 0
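One simple way to quantify "more dispersed gaze behavior" is the RMS distance of fixations from their centroid; this is a minimal sketch assuming pixel-coordinate fixation data, not the paper's exact spatial density analysis.

```python
# Minimal sketch of a gaze-dispersion measure: RMS distance of fixation
# points from their centroid. Data below are synthetic.
import numpy as np

def gaze_dispersion(x, y):
    """RMS distance of fixations from their centroid, in screen units."""
    pts = np.column_stack([x, y])
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(0)
focused = gaze_dispersion(rng.normal(960, 30, 100), rng.normal(540, 30, 100))
spread = gaze_dispersion(rng.normal(960, 200, 100), rng.normal(540, 200, 100))
print(f"focused: {focused:.1f} px, dispersed: {spread:.1f} px")
```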
An Eye Opener on the Use of Machine Learning in Eye Movement Based Authentication
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3531631
Siyuan Peng, N. A. Madi
Abstract: The viability of and need for eye movement-based authentication have been well established in light of the recent adoption of Virtual Reality headsets and Augmented Reality glasses. Previous research has demonstrated the practicality of eye movement-based authentication, but there remains room for improvement in achieving higher identification accuracy. In this study, we focus on incorporating linguistic features into eye movement-based authentication, and we compare our approach to authentication based purely on common first-order metrics across 9 machine learning models. Using GazeBase, a large eye movement dataset with 322 participants, and the CELEX lexical database, we show that the AdaBoost classifier is the best-performing model, with an average F1 score of 74.6%. More importantly, we show that the use of linguistic features increased the accuracy of most classification models. Our results provide insights into the use of machine learning models and motivate further work on incorporating text analysis into eye movement-based authentication.
Citations: 0
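The classification setup maps naturally onto scikit-learn; this is a minimal sketch with synthetic features standing in for the GazeBase first-order metrics and CELEX-derived linguistic features.

```python
# Minimal sketch of the evaluation pattern: AdaBoost scored with F1 under
# cross-validation. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns might be e.g. mean fixation duration, saccade amplitude,
# word frequency, word length... (placeholders, not the paper's features).
X = rng.normal(size=(300, 6))
y = rng.integers(0, 2, size=300)  # two identities, for illustration only

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"mean F1: {scores.mean():.3f}")
```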
The Trans-Saccadic Extrafoveal Preview Effect is Modulated by Object Visibility
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529622
Xiaoyi Liu, Christoph Huber-Huber, D. Melcher
Abstract: We used a gaze-contingent eye-tracking setup to investigate how peripheral vision before a saccade affects post-saccadic foveal processing. Studies have revealed robust changes in foveal processing when the target is available in peripheral vision (the extrafoveal preview effect). To further characterize the role of peripheral vision, we adopted a paradigm in which an upright or inverted extrafoveal face stimulus was shown and changed orientation during the saccade (an invalid preview) on 50% of trials. Invalid previews significantly reduced post-saccadic discrimination performance compared to valid previews (the preview effect). In addition, the preview face varied in eccentricity and in added noise, both of which affected its visibility; face visibility was operationalized via a lateralized face-identification task run in a separate session. A mixed-model analysis suggests that visibility modulated the preview effect. Overall, these findings constrain theories of how preview effects might influence perception under natural viewing conditions.
Citations: 1
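The mixed-model analysis could look roughly like the following statsmodels sketch, with discrimination performance predicted by preview validity, visibility, and their interaction, plus random intercepts per participant; the column names and synthetic data are assumptions, not the paper's dataset.

```python
# Minimal sketch of a mixed model with random intercepts per subject.
# All columns and effect sizes below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n).astype(str),
    "valid": rng.integers(0, 2, n),        # 1 = valid preview
    "visibility": rng.uniform(0, 1, n),    # from the separate session
})
df["accuracy"] = (0.6 + 0.1 * df.valid
                  + 0.1 * df.valid * df.visibility
                  + rng.normal(0, 0.1, n))

model = smf.mixedlm("accuracy ~ valid * visibility", df, groups="subject")
print(model.fit().summary())
```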
Research on Time Series Evaluation of Cognitive Load Factors using Features of Eye Movement
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529236
Tomomi Okano, M. Nakayama
Abstract: The relationship between ocular metrics and factor ratings of mental workload is examined using a Bayesian state-space modeling technique. During a visual search task experiment, microsaccade frequency and pupil size were observed as measures of mental workload, and individual mental workload was measured using the six-factor NASA-TLX rating scale. The models estimated generalized temporal changes in microsaccade frequency and pupil size during the tasks, and the contributions of the mental workload factors were examined using effect sizes. Chronological analysis was also introduced to detect reactions of the metrics during the tasks. The results suggest that the response selectivity of microsaccades and pupil size can serve as indicators of mental workload factors.
Citations: 1
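A state-space treatment of a pupil-size series can be sketched with a local-level model; note that this minimal version is fit by maximum likelihood rather than the paper's Bayesian approach, and the data are synthetic.

```python
# Minimal sketch: a local-level state-space model extracting a smoothed
# latent level from a noisy pupil-size time series (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(300)
pupil = 3.5 + 0.3 * np.sin(t / 40) + rng.normal(0, 0.1, t.size)  # mm

model = sm.tsa.UnobservedComponents(pupil, level="local level")
res = model.fit(disp=False)
smoothed_level = res.smoothed_state[0]  # latent workload-related signal
print(smoothed_level[:5])
```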
Feasibility of a Device for Gaze Interaction by Visually-Evoked Brain Signals
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529232
Baosheng James Hou, J. P. Hansen, Cihan Uyanik, Per Baekgaard, S. Puthusserypady, Jacopo M. Araujo, I. MacKenzie
Abstract: A dry-electrode, head-mounted sensor for visually-evoked electroencephalogram (EEG) signals has been introduced to the gamer market; it provides wireless, low-cost, real-time tracking of a user's gaze fixation on target areas. Unlike traditional EEG sensors, this new device is easy for non-professionals to set up. We conducted a Fitts' law study (N = 6) and found the mean throughput (TP) to be 0.82 bits/s. The sensor yielded robust performance, with error rates below 1%. The overall median activation time (AT) was 2.35 s, with a minuscule difference between one and nine concurrent targets. We discuss whether the method might supplement camera-based gaze interaction, for example in gaze typing or wheelchair control, and note some limitations, such as the slow AT, the difficulty of calibration with thick hair, and the limit of 10 concurrent targets.
Citations: 2
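The throughput figure comes from the standard Fitts' law calculation, TP = ID/MT with ID = log2(D/W + 1); the sketch below uses made-up distance and width values, not the study's targets.

```python
# Minimal sketch of the standard (ISO 9241-9 style) Fitts' law throughput
# computation. Target distance and width below are invented.
import math

def throughput(distance, width, movement_time_s):
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time_s           # bits/s

# e.g. a 2-bit target (D/W = 3) acquired in 2.35 s, the paper's median AT
print(f"{throughput(300, 100, 2.35):.2f} bits/s")
```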
Regressive Saccadic Eye Movements on Fake News
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529619
Efe Bozkir, G. Kasneci, S. Utz, Enkelejda Kasneci
Abstract: With the increasing use of the Internet, people encounter a variety of news in online and social media every day. For digital content without fact-checking mechanisms, people are likely to perceive fake news as real when they lack extensive knowledge of the news topic. In this paper, we study human eye movements when reading fake news and real news. Our results suggest that people make more regressive eye movements when reading fake news, while the time until the first fixation in the text area of interest does not distinguish between real and fake content. Thus, although the truthfulness of the content is not known to readers in advance, their visual behavior differs when reading such content, indicating a higher level of confusion when reading fake content.
Citations: 3
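A regressive saccade during left-to-right reading is, at its simplest, a leftward jump between consecutive fixations; this sketch counts such jumps, with the threshold and coordinates being illustrative assumptions.

```python
# Minimal sketch: fraction of inter-fixation movements that go leftward
# beyond a small threshold, a crude proxy for regression rate.
import numpy as np

def regression_rate(fix_x, min_jump_px=20):
    """Fraction of consecutive fixation pairs that jump leftward."""
    dx = np.diff(np.asarray(fix_x, dtype=float))
    return float((dx < -min_jump_px).mean())

real_like = [100, 180, 260, 340, 420, 500]  # steady forward reading
fake_like = [100, 180, 120, 260, 200, 340]  # two re-readings
print(regression_rate(real_like), regression_rate(fake_like))  # 0.0 0.4
```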
Iris Print Attack Detection using Eye Movement Signals
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3532521
M. H. Raju, D. Lohr, Oleg V. Komogortsev
Abstract: Iris-based biometric authentication is a widespread biometric modality due to its accuracy, among other benefits, and improving the resistance of iris biometrics to spoofing attacks is an important research topic. Eye-tracking and iris-recognition devices have similar hardware, consisting of an infrared light source and an image sensor; this similarity potentially enables eye-tracking algorithms to run on iris-driven biometric systems. The present work advances the state of the art in detecting iris print attacks, wherein an impostor presents a printout of an authentic user's iris to a biometric system. Detection is accomplished by analyzing the captured eye movement signal with a deep learning model. Results indicate better performance than the previous state of the art.
Citations: 4
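The detection idea can be sketched as a small 1D CNN over the eye movement signal (a printed iris produces essentially no physiological gaze motion); the architecture below is an assumption, not the authors' model.

```python
# Minimal sketch: a 1D CNN classifying an eye-movement signal as live
# vs. printed iris. Channels, depths, and sizes are illustrative.
import torch
import torch.nn as nn

class PrintAttackDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),  # logit: live (0) vs. print attack (1)
        )

    def forward(self, xy_velocity):  # shape: (batch, 2, time)
        return self.net(xy_velocity)

model = PrintAttackDetector()
signal = torch.randn(8, 2, 500)  # 8 samples of horizontal/vertical velocity
print(torch.sigmoid(model(signal)).shape)  # torch.Size([8, 1])
```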
Look & Turn: One-handed and Expressive Menu Interaction by Gaze and Arm Turns in VR
2022 Symposium on Eye Tracking Research and Applications. Pub Date: 2022-06-08. DOI: 10.1145/3517031.3529233
Katharina Reiter, Ken Pfeuffer, Augusto Esteves, Tim Mittermeier, Florian Alt
Abstract: A user's free hands provide an intuitive platform on which to position and design virtual menu interfaces. We explore how the hands and eyes can be integrated into the design of hand-attached menus. We synthesize past work from the literature and derive a design space that crosses properties of menu systems with a hand-and-eye input vocabulary. From this, we devise three menu systems based on the novel concept of Look & Turn: gaze indicates menu selection, and rotational turns of the wrist navigate the menu and manipulate continuous parameters. Each technique allows users to interact with the hand-attached menu using the same hand, while keeping the other hand free for drawing. Based on a VR prototype that combines eye tracking and glove-based finger tracking, we discuss first insights into technical and human factors of this promising interaction concept.
Citations: 8
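The Look & Turn mapping can be sketched as gaze-based selection plus a clamped linear mapping from wrist roll to a continuous parameter; the ranges and names below are illustrative assumptions, not the prototype's code.

```python
# Minimal sketch: map wrist roll (degrees) linearly onto a parameter
# range, clamped at both ends. Ranges and item names are invented.
def wrist_turn_to_value(roll_deg, v_min=0.0, v_max=1.0,
                        roll_min=-60.0, roll_max=60.0):
    """Clamped linear mapping from wrist roll to a continuous parameter."""
    t = (roll_deg - roll_min) / (roll_max - roll_min)
    return v_min + (v_max - v_min) * min(max(t, 0.0), 1.0)

# Gaze dwell selects the item; wrist roll then adjusts its value.
selected = "brush_size"  # hypothetical menu item the user is looking at
print(selected, wrist_turn_to_value(15.0))  # 15 deg of roll -> 0.625
```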