Algorithmic gaze annotation for mobile eye-tracking.

IF 3.9 · JCR Q1 (Psychology, Experimental) · CAS Region 2 (Psychology)
Daniel Mueller, David Mann
{"title":"Algorithmic gaze annotation for mobile eye-tracking.","authors":"Daniel Mueller, David Mann","doi":"10.3758/s13428-025-02803-2","DOIUrl":null,"url":null,"abstract":"<p><p>Mobile eye-tracking is increasingly used to study human behavior in situ; however, the analysis of the footage is typically performed manually and therefore is slow and laborious. The aim of this study was to examine the extent to which the footage obtained using mobile eye-tracking could be annotated automatically using computer vision algorithms. We developed an open-source Python package that combined two computer vision algorithms to automatically annotate human-body-related areas of interest when two participants interacted with each other. To validate the algorithm, three experienced human raters coded the gaze direction with respect to one of seven a priori defined areas of interest during the task. To test the reliability of the algorithm, the agreement between the human raters was compared with the results obtained from the algorithm. A total of 1,188 frames from 13 trials were compared, with the results revealing substantial agreement between the algorithm and human raters (Krippendorff's alpha = 0.61). The algorithm strictly annotated whether gaze was within or outside of the specified areas of interest, whereas human raters seemed to apply a tolerance when gaze was lying slightly outside the areas of interest. In sum, the computer algorithmic approach appears to provide a valid means of automatically annotating mobile eye-tracking footage in highly dynamic contexts. The possibility of automatically annotating eye-tracking footage of human interactions allows for automatic assessment of visual attention, gaze, and intentions across sectors such as educational settings, pedestrian navigation, and sport.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 10","pages":"290"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443921/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-025-02803-2","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Mobile eye-tracking is increasingly used to study human behavior in situ; however, the analysis of the footage is typically performed manually and is therefore slow and laborious. The aim of this study was to examine the extent to which footage obtained using mobile eye-tracking could be annotated automatically using computer vision algorithms. We developed an open-source Python package that combined two computer vision algorithms to automatically annotate human-body-related areas of interest when two participants interacted with each other. To validate the algorithm, three experienced human raters coded the gaze direction with respect to one of seven a priori defined areas of interest during the task. To test the reliability of the algorithm, the agreement between the human raters was compared with the results obtained from the algorithm. A total of 1,188 frames from 13 trials were compared, with the results revealing substantial agreement between the algorithm and human raters (Krippendorff's alpha = 0.61). The algorithm strictly annotated whether gaze was within or outside of the specified areas of interest, whereas human raters appeared to apply a tolerance when gaze fell slightly outside the areas of interest. In sum, the algorithmic approach appears to provide a valid means of automatically annotating mobile eye-tracking footage in highly dynamic contexts. The ability to automatically annotate eye-tracking footage of human interactions enables automatic assessment of visual attention, gaze, and intentions in domains such as education, pedestrian navigation, and sport.

Source journal: Behavior Research Methods
CiteScore: 10.30 · Self-citation rate: 9.30% · Articles per year: 266
Journal description: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.