Predicting observer's task from eye movement patterns during motion image analysis

Jutta Hild, M. Voit, Christian Kühnle, J. Beyerer
{"title":"Predicting observer's task from eye movement patterns during motion image analysis","authors":"Jutta Hild, M. Voit, Christian Kühnle, J. Beyerer","doi":"10.1145/3204493.3204575","DOIUrl":null,"url":null,"abstract":"Predicting an observer's tasks from eye movements during several viewing tasks has been investigated by several authors. This contribution adds task prediction from eye movements tasks occurring during motion image analysis: Explore, Observe, Search, and Track. For this purpose, gaze data was recorded from 30 human observers viewing a motion image sequence once under each task. For task decoding, the classification methods Random Forest, LDA, and QDA were used; features were fixation- or saccade-related measures. Best accuracy for prediction of the three tasks Observe, Search, Track from the 4-minute gaze data samples was 83.7% (chance level 33%) using Random Forest. Best accuracy for prediction of all four tasks from the gaze data samples containing the first 30 seconds of viewing was 59.3% (chance level 25%) using LDA. Accuracy decreased significantly for task prediction on small gaze data chunks of 5 and 3 seconds, being 45.3% and 38.0% (chance 25%) for the four tasks, and 52.3% and 47.7% (chance 33%) for the three tasks.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3204493.3204575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Predicting an observer's task from eye movements has been investigated by several authors for a range of viewing tasks. This contribution adds task prediction from eye movements for four tasks occurring during motion image analysis: Explore, Observe, Search, and Track. For this purpose, gaze data was recorded from 30 human observers, each viewing a motion image sequence once under each task. For task decoding, the classification methods Random Forest, LDA, and QDA were used; features were fixation- or saccade-related measures. The best accuracy for predicting the three tasks Observe, Search, and Track from the 4-minute gaze data samples was 83.7% (chance level 33%), achieved with Random Forest. The best accuracy for predicting all four tasks from gaze data samples containing the first 30 seconds of viewing was 59.3% (chance level 25%), achieved with LDA. Accuracy decreased significantly for task prediction on small gaze data chunks of 5 and 3 seconds: 45.3% and 38.0% (chance level 25%) for the four tasks, and 52.3% and 47.7% (chance level 33%) for the three tasks.
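
The abstract names Random Forest, LDA, and QDA as classifiers over fixation- and saccade-related features computed per gaze data sample or chunk. The following is a minimal sketch of such a task-decoding pipeline, not the authors' implementation: the specific features, the placeholder data, and the cross-validation protocol are assumptions made only for illustration.

```python
# Sketch of a task-decoding pipeline as described in the abstract:
# Random Forest, LDA, and QDA over fixation-/saccade-related features.
# Feature choices, data, and evaluation protocol are assumptions, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import cross_val_score


def chunk_features(fix_durations, sacc_amplitudes):
    """Summarize one gaze chunk by simple fixation/saccade statistics
    (assumed features, e.g. count, mean, and std of fixation duration
    and saccade amplitude)."""
    return [
        len(fix_durations), np.mean(fix_durations), np.std(fix_durations),
        len(sacc_amplitudes), np.mean(sacc_amplitudes), np.std(sacc_amplitudes),
    ]


# Placeholder data standing in for per-chunk feature vectors and task labels;
# in the study these would be derived from the recorded gaze data of 30 observers.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))        # 120 gaze chunks x 6 features
y = rng.integers(0, 4, size=120)     # 4 task labels: Explore, Observe, Search, Track

classifiers = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV; the paper's protocol may differ
    print(f"{name}: mean accuracy {scores.mean():.1%}")
```

With real gaze data, each row of X would be the output of a feature extractor such as the hypothetical chunk_features above, computed over a full 4-minute recording, the first 30 seconds, or a 5- or 3-second chunk, which is the window-size comparison reported in the abstract.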