{"title":"瞳孔测量和头部到屏幕的距离预测信息可视化任务中的技能获取","authors":"Dereck Toker, Sébastien Lallé, C. Conati","doi":"10.1145/3025171.3025187","DOIUrl":null,"url":null,"abstract":"In this paper we investigate using a variety of behavioral measures collectible with an eye tracker to predict a user's skill acquisition phase while performing various information visualization tasks with bar graphs. Our long term goal is to use this information in real-time to create user-adaptive visualizations that can provide personalized support to facilitate visualization processing based on the user's predicted skill level. We show that leveraging two additional content-independent data sources, namely information on a user's pupil dilation and head distance to the screen, yields a significant improvement for predictive accuracies of skill acquisition compared to predictions made using content-dependent information related to user eye gaze attention patterns, as was done in previous work. We show that including features from both pupil dilation and head distance to the screen improve the ability to predict users' skill acquisition state, beating both the baseline and a model using only content-dependent gaze information.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"49 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":"{\"title\":\"Pupillometry and Head Distance to the Screen to Predict Skill Acquisition During Information Visualization Tasks\",\"authors\":\"Dereck Toker, Sébastien Lallé, C. Conati\",\"doi\":\"10.1145/3025171.3025187\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we investigate using a variety of behavioral measures collectible with an eye tracker to predict a user's skill acquisition phase while performing various information visualization tasks with bar graphs. Our long term goal is to use this information in real-time to create user-adaptive visualizations that can provide personalized support to facilitate visualization processing based on the user's predicted skill level. We show that leveraging two additional content-independent data sources, namely information on a user's pupil dilation and head distance to the screen, yields a significant improvement for predictive accuracies of skill acquisition compared to predictions made using content-dependent information related to user eye gaze attention patterns, as was done in previous work. 
We show that including features from both pupil dilation and head distance to the screen improve the ability to predict users' skill acquisition state, beating both the baseline and a model using only content-dependent gaze information.\",\"PeriodicalId\":166632,\"journal\":{\"name\":\"Proceedings of the 22nd International Conference on Intelligent User Interfaces\",\"volume\":\"49 2\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"23\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 22nd International Conference on Intelligent User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3025171.3025187\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3025171.3025187","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Pupillometry and Head Distance to the Screen to Predict Skill Acquisition During Information Visualization Tasks
In this paper we investigate using a variety of behavioral measures that can be collected with an eye tracker to predict a user's skill acquisition phase while performing information visualization tasks with bar graphs. Our long-term goal is to use this information in real time to create user-adaptive visualizations that provide personalized support for visualization processing based on the user's predicted skill level. We show that leveraging two additional content-independent data sources, namely a user's pupil dilation and head distance to the screen, yields a significant improvement in the predictive accuracy of skill acquisition compared to predictions made using only content-dependent information about the user's gaze attention patterns, as was done in previous work. Including features from both pupil dilation and head distance to the screen improves the ability to predict users' skill acquisition state, beating both the baseline and a model using only content-dependent gaze information.
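To make the modeling setup concrete, the sketch below illustrates the kind of feature-set comparison the abstract describes: a majority-class baseline, a model trained only on content-dependent gaze features, and a model that adds the two content-independent sources (pupil dilation and head distance to the screen). This is a minimal sketch, assuming scikit-learn; the feature names and data are synthetic placeholders, and the random-forest classifier is an illustrative stand-in, not the paper's actual model or evaluation protocol.

```python
# Minimal sketch of the feature-set comparison described in the abstract.
# All data here is synthetic and the classifier choice is an assumption;
# the paper's actual features, model, and evaluation differ.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 200

# Hypothetical feature groups (dimensions are illustrative only):
gaze = rng.normal(size=(n_users, 6))    # content-dependent gaze features
pupil = rng.normal(size=(n_users, 3))   # pupil-dilation summary statistics
head = rng.normal(size=(n_users, 2))    # head-distance-to-screen statistics

# Binary skill-acquisition label (e.g., early vs. late learning phase).
y = rng.integers(0, 2, size=n_users)

feature_sets = {
    "baseline (majority class)": (gaze, DummyClassifier(strategy="most_frequent")),
    "gaze only (content-dependent)": (gaze, RandomForestClassifier(random_state=0)),
    "gaze + pupil + head distance": (
        np.hstack([gaze, pupil, head]),
        RandomForestClassifier(random_state=0),
    ),
}

# Compare mean cross-validated accuracy across the three conditions.
for name, (X, clf) in feature_sets.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```

On real data, the claim in the abstract corresponds to the third condition outperforming both the first (baseline) and the second (gaze-only) condition; with the random labels above, all three accuracies will hover near chance.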