Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach.

Impact Factor: 2.8 · CAS Quartile 4 (Psychology) · JCR Q3 (Ophthalmology)
Journal of Eye Movement Research Pub Date : 2025-08-07 eCollection Date: 2025-08-01 DOI:10.3390/jemr18040035
Paweł Cybulski
Citations: 0

Abstract

Visual search is a core component of map reading, influenced by both cartographic design and human perceptual processes. This study investigates whether the location of a target cartographic symbol (central or peripheral) can be predicted using eye-tracking data and machine learning techniques. Two datasets were analyzed, each derived from a separate study involving visual search tasks with varying map characteristics. A comprehensive set of eye movement features, including fixation duration, saccade amplitude, and gaze dispersion, was extracted and standardized. Feature selection and polynomial interaction terms were applied to enhance model performance. Twelve supervised classification algorithms were tested, including Random Forest, Gradient Boosting, and Support Vector Machines. The models were evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. Results show that models trained on the first dataset achieved higher accuracy and class separation, with AdaBoost and Gradient Boosting performing best (accuracy = 0.822; ROC-AUC > 0.86). In contrast, the second dataset presented greater classification challenges, despite high recall in some models. Feature importance analysis revealed that fixation standard deviation, used as a proxy for gaze dispersion, particularly along the vertical axis, was the most predictive metric. These findings suggest that gaze behavior can reliably indicate the spatial focus of visual search, providing valuable insight for the development of adaptive, gaze-aware cartographic interfaces.
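The evaluation pipeline described in the abstract (standardization, polynomial interaction terms, feature selection, multiple classifiers, cross-validated metrics) can be sketched with scikit-learn. This is a minimal illustration, not the paper's actual code: the feature set, synthetic data, and model hyperparameters are assumptions, and only four of the twelve tested classifiers are shown.

```python
# Illustrative sketch of the pipeline in the abstract: standardized eye-movement
# features -> polynomial interaction terms -> feature selection -> classifier,
# scored with accuracy and ROC-AUC. Data are synthetic stand-ins, not the
# study's datasets.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
# Hypothetical features: fixation duration, saccade amplitude,
# fixation SD along x, fixation SD along y (gaze-dispersion proxies).
X = rng.normal(size=(200, 4))
# Synthetic binary label (central vs. peripheral target), driven mostly by
# vertical dispersion, echoing the paper's feature-importance finding.
y = (X[:, 3] + 0.5 * rng.normal(size=200) > 0).astype(int)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}

for name, clf in models.items():
    pipe = make_pipeline(
        StandardScaler(),                         # standardize features
        PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False),   # interaction terms
        SelectKBest(f_classif, k=5),              # feature selection
        clf,
    )
    scores = cross_validate(pipe, X, y, cv=5,
                            scoring=["accuracy", "precision",
                                     "recall", "f1", "roc_auc"])
    print(f"{name}: acc={scores['test_accuracy'].mean():.3f} "
          f"auc={scores['test_roc_auc'].mean():.3f}")
```

Comparing all models inside one fixed pipeline, as above, keeps the preprocessing identical across classifiers so that metric differences reflect the learner rather than the feature treatment.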


Source Journal
CiteScore: 2.90
Self-citation rate: 33.30%
Articles published: 10
Review time: 10 weeks
Journal description: The Journal of Eye Movement Research is an open-access, peer-reviewed scientific periodical devoted to all aspects of oculomotor functioning, including methodology of eye recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas.