Eye-Based Recognition of User Traits and States – A Systematic State-of-the-Art Review

Impact Factor 1.3 · CAS Zone 4 (Psychology) · JCR Q3 (Ophthalmology)
Moritz Langner, Peyman Toreini, Alexander Maedche
{"title":"Eye-Based Recognition of User Traits and States-A Systematic State-of-the-Art Review.","authors":"Moritz Langner, Peyman Toreini, Alexander Maedche","doi":"10.3390/jemr18020008","DOIUrl":null,"url":null,"abstract":"<p><p>Eye-tracking technology provides high-resolution information about a user's visual behavior and interests. Combined with advances in machine learning, it has become possible to recognize user traits and states using eye-tracking data. Despite increasing research interest, a comprehensive systematic review of eye-based recognition approaches has been lacking. This study aimed to fill this gap by systematically reviewing and synthesizing the existing literature on the machine-learning-based recognition of user traits and states using eye-tracking data following PRISMA 2020 guidelines. The inclusion criteria focused on studies that applied eye-tracking data to recognize user traits and states with machine learning or deep learning approaches. Searches were performed in the ACM Digital Library and IEEE Xplore and the found studies were assessed for the risk of bias using standard methodological criteria. The data synthesis included a conceptual framework that covered the task, context, technology and data processing, and recognition targets. A total of 90 studies were included that encompassed a variety of tasks (e.g., visual, driving, learning) and contexts (e.g., computer screen, simulator, wild). The recognition targets included cognitive and affective states (e.g., emotions, cognitive workload) and user traits (e.g., personality, working memory). A set of various machine learning techniques, such as Support Vector Machines (SVMs), Random Forests, and deep learning models were applied to recognize user states and traits. This review identified state-of-the-art approaches and gaps, which highlighted the need for building up best practices, larger-scale datasets, and diversifying tasks and contexts. Future research should focus on improving the ecological validity, multi-modal approaches for robust user modeling, and developing gaze-adaptive systems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 2","pages":"8"},"PeriodicalIF":1.3000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12027520/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Eye Movement Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/jemr18020008","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}

Abstract

Eye-tracking technology provides high-resolution information about a user's visual behavior and interests. Combined with advances in machine learning, it has become possible to recognize user traits and states using eye-tracking data. Despite increasing research interest, a comprehensive systematic review of eye-based recognition approaches has been lacking. This study aimed to fill this gap by systematically reviewing and synthesizing the existing literature on the machine-learning-based recognition of user traits and states from eye-tracking data, following the PRISMA 2020 guidelines. The inclusion criteria focused on studies that applied eye-tracking data to recognize user traits and states with machine learning or deep learning approaches. Searches were performed in the ACM Digital Library and IEEE Xplore, and the retrieved studies were assessed for risk of bias using standard methodological criteria. The data synthesis was organized around a conceptual framework covering the task, context, technology and data processing, and recognition targets. A total of 90 studies were included, encompassing a variety of tasks (e.g., visual, driving, learning) and contexts (e.g., computer screen, simulator, in the wild). The recognition targets included cognitive and affective states (e.g., emotions, cognitive workload) as well as user traits (e.g., personality, working memory). Various machine learning techniques, such as Support Vector Machines (SVMs), Random Forests, and deep learning models, were applied to recognize user states and traits. The review identified state-of-the-art approaches and open gaps, highlighting the need to establish best practices, build larger-scale datasets, and diversify tasks and contexts. Future research should focus on improving ecological validity, adopting multi-modal approaches for robust user modeling, and developing gaze-adaptive systems.
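To make the recognition pipeline concrete, the following is a minimal, purely illustrative sketch of how the classifier families named above (an SVM and a Random Forest) could be trained on hand-crafted gaze features to predict a binary user state such as cognitive workload. The feature set (fixation duration, saccade amplitude, pupil diameter), the synthetic data, and all parameter choices are assumptions made for illustration and are not taken from any of the reviewed studies.

```python
# Illustrative sketch (not from the reviewed studies): classifying a binary user
# state (e.g., low vs. high cognitive workload) from hand-crafted gaze features
# with scikit-learn. Feature names, data, and labels are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-trial features: mean fixation duration (ms),
# mean saccade amplitude (deg), mean pupil diameter (mm).
X = rng.normal(loc=[250.0, 4.0, 3.5], scale=[60.0, 1.5, 0.4], size=(200, 3))
y = rng.integers(0, 2, size=200)  # synthetic labels: 0 = low, 1 = high workload

# Two of the classifier families named in the review: SVM and Random Forest.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("SVM", svm), ("Random Forest", rf)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```

Feature-based pipelines of this kind are one of the two broad approaches covered in the review; the other relies on deep learning models that consume raw gaze or pupil time series directly rather than aggregated features.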

Source journal: Journal of Eye Movement Research
CiteScore: 2.90 · Self-citation rate: 33.30% · Annual publication volume: 10 · Review time: 10 weeks
Journal description: The Journal of Eye Movement Research is an open-access, peer-reviewed scientific periodical devoted to all aspects of oculomotor functioning, including the methodology of eye recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas.