RavenGaze: A Dataset for Gaze Estimation Leveraging Psychological Experiment Through Eye Tracker

Tao Xu, Borimandafu Wu, Yuqiong Bai, Yun Zhou
{"title":"RavenGaze:一个利用眼动仪进行心理实验的注视估计数据集","authors":"Tao Xu, Borimandafu Wu, Yuqiong Bai, Yun Zhou","doi":"10.1109/FG57933.2023.10042793","DOIUrl":null,"url":null,"abstract":"One major challenge in appearance-based gaze estimation is the lack of high-quality labeled data. Establishing databases or datasets is a way to obtain accurate gaze data and test methods or tools. However, the methods of collecting data in existing databases are designed on artificial chasing target tasks or unintentional free-looking tasks, which are not natural and real eye interactions and cannot reflect the inner cognitive processes of humans. To fill this gap, we propose the first gaze estimation dataset collected from an actual psychological experiment by the eye tracker, called the RavenGaze dataset. We design an experiment employing Raven's Matrices as visual stimuli and collecting gaze data, facial videos as well as screen content videos simultaneously. Thirty-four participants were recruited. The results show that the existing algorithms perform well on our RavenGaze dataset in the 3D and 2D gaze estimation task, and demonstrate good generalization ability according to cross-dataset evaluation task. RavenGaze and the establishment of the benchmark lay the foundation for other researchers to do further in-depth research and test their methods or tools. Our dataset is available at https://intelligentinteractivelab.github.io/datasets/RavenGaze/index.html.","PeriodicalId":318766,"journal":{"name":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RavenGaze: A Dataset for Gaze Estimation Leveraging Psychological Experiment Through Eye Tracker\",\"authors\":\"Tao Xu, Borimandafu Wu, Yuqiong Bai, Yun Zhou\",\"doi\":\"10.1109/FG57933.2023.10042793\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"One major challenge in appearance-based gaze estimation is the lack of high-quality labeled data. Establishing databases or datasets is a way to obtain accurate gaze data and test methods or tools. However, the methods of collecting data in existing databases are designed on artificial chasing target tasks or unintentional free-looking tasks, which are not natural and real eye interactions and cannot reflect the inner cognitive processes of humans. To fill this gap, we propose the first gaze estimation dataset collected from an actual psychological experiment by the eye tracker, called the RavenGaze dataset. We design an experiment employing Raven's Matrices as visual stimuli and collecting gaze data, facial videos as well as screen content videos simultaneously. Thirty-four participants were recruited. The results show that the existing algorithms perform well on our RavenGaze dataset in the 3D and 2D gaze estimation task, and demonstrate good generalization ability according to cross-dataset evaluation task. RavenGaze and the establishment of the benchmark lay the foundation for other researchers to do further in-depth research and test their methods or tools. 
Our dataset is available at https://intelligentinteractivelab.github.io/datasets/RavenGaze/index.html.\",\"PeriodicalId\":318766,\"journal\":{\"name\":\"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FG57933.2023.10042793\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FG57933.2023.10042793","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

One major challenge in appearance-based gaze estimation is the lack of high-quality labeled data. Building databases or datasets is one way to obtain accurate gaze data and to test methods and tools. However, the data-collection protocols of existing databases are built around artificial target-chasing tasks or unintentional free-viewing tasks, which are neither natural nor realistic eye interactions and cannot reflect the inner cognitive processes of humans. To fill this gap, we propose the first gaze estimation dataset collected with an eye tracker during an actual psychological experiment, called the RavenGaze dataset. We design an experiment that employs Raven's Matrices as visual stimuli and collects gaze data, facial videos, and screen content videos simultaneously. Thirty-four participants were recruited. The results show that existing algorithms perform well on our RavenGaze dataset in the 3D and 2D gaze estimation tasks and demonstrate good generalization ability in the cross-dataset evaluation task. RavenGaze and the accompanying benchmark lay the foundation for other researchers to conduct further in-depth research and to test their methods or tools. Our dataset is available at https://intelligentinteractivelab.github.io/datasets/RavenGaze/index.html.
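For context on how such benchmarks are commonly scored: the 3D gaze estimation task is conventionally evaluated with the angular error between predicted and ground-truth gaze direction vectors, and the 2D task with the on-screen distance between the predicted and true points of gaze. The snippet below is a minimal sketch of the standard angular-error metric in Python/NumPy; it is not code from the RavenGaze benchmark, and the function name and example values are illustrative only.

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Angular error in degrees between predicted and ground-truth
    3D gaze direction vectors. Both arrays have shape (N, 3)."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    # Clip to guard against floating-point values slightly outside [-1, 1].
    cos_sim = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

if __name__ == "__main__":
    # Hypothetical example: the second prediction is tilted ~5.7 degrees off.
    pred = np.array([[0.0, 0.0, -1.0], [0.1, 0.0, -1.0]])
    gt = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, -1.0]])
    print(angular_error_deg(pred, gt))  # approx. [0.0, 5.71]
```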