Pupillometry and Head Distance to the Screen to Predict Skill Acquisition During Information Visualization Tasks

Dereck Toker, Sébastien Lallé, C. Conati
DOI: 10.1145/3025171.3025187
Published in: Proceedings of the 22nd International Conference on Intelligent User Interfaces (March 7, 2017)
Citations: 23

Abstract

In this paper we investigate using a variety of behavioral measures, collectible with an eye tracker, to predict a user's skill acquisition phase while performing information visualization tasks with bar graphs. Our long-term goal is to use this information in real time to create user-adaptive visualizations that provide personalized support for visualization processing based on the user's predicted skill level. We show that leveraging two additional content-independent data sources, namely a user's pupil dilation and head distance to the screen, yields a significant improvement in the predictive accuracy of skill acquisition compared to predictions made using only content-dependent information on eye-gaze attention patterns, as was done in previous work. Including features from both pupil dilation and head distance to the screen improves the ability to predict users' skill acquisition state, beating both the baseline and a model using only content-dependent gaze information.
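To illustrate the kind of content-independent features the abstract describes, the sketch below summarizes a window of raw eye-tracker samples into mean/standard-deviation statistics over pupil diameter and head distance, which could then be fed to a skill-acquisition classifier. This is a minimal sketch under assumed conventions: the `Sample` fields, units, and feature names are illustrative and not taken from the paper, and the downstream classifier is not shown.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Sample:
    """One eye-tracker sample (field names and units are illustrative)."""
    pupil_diameter_mm: float   # pupil dilation signal
    head_distance_mm: float    # distance from the user's head to the screen

def window_features(samples):
    """Summarize a window of samples into content-independent features.

    Returns mean and standard deviation of pupil diameter and head
    distance -- the two data sources the paper adds on top of
    content-dependent gaze features.
    """
    pupils = [s.pupil_diameter_mm for s in samples]
    dists = [s.head_distance_mm for s in samples]
    return {
        "pupil_mean": mean(pupils),
        "pupil_std": stdev(pupils) if len(pupils) > 1 else 0.0,
        "dist_mean": mean(dists),
        "dist_std": stdev(dists) if len(dists) > 1 else 0.0,
    }

window = [Sample(3.1, 600.0), Sample(3.3, 610.0), Sample(3.2, 605.0)]
features = window_features(window)
```

Because these features do not depend on what is shown on screen, they can be computed the same way for any visualization, which is what makes them attractive for the real-time, user-adaptive setting the paper targets.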