A Dataset of Gaze and Mouse Patterns in the Context of Facial Expression Recognition
Alexandre Bruckert, Lucie Lévêque, Matthieu Perreira da Silva, P. Le Callet
Proceedings of the 2023 ACM International Conference on Interactive Media Experiences, 12 June 2023. DOI: 10.1145/3573381.3596153 (https://doi.org/10.1145/3573381.3596153)
Abstract
Facial expression recognition is an important and challenging task for both the computer vision and affective computing communities, especially in the context of multimedia applications, where audience understanding is of particular interest. Recent data-oriented approaches have created the need for large-scale annotated datasets. However, most existing datasets suffer from weaknesses stemming from the methods used to collect them. To further highlight these issues, we investigate in this work how human visual attention is deployed when performing a facial expression recognition task. To do so, we carried out several complementary experiments, using eye-tracking technology as well as the BubbleView metaphor, in both laboratory and crowdsourcing settings. We show significant variations in gaze patterns depending not only on the emotion represented, but also on the difficulty of the task, i.e., whether the emotion is correctly recognised or not. Moreover, we use these results to propose recommendations on how to collect labels for facial expression recognition datasets.
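For readers unfamiliar with the BubbleView metaphor, the sketch below illustrates its core idea under simple assumptions: the stimulus is shown blurred, and each mouse click reveals a sharp circular "bubble" around the click point, so that click locations serve as an inexpensive proxy for eye fixations. This is a minimal illustration, not the authors' implementation; the image path, bubble radius, and blur strength are hypothetical placeholders.

```python
from PIL import Image, ImageDraw, ImageFilter

def bubbleview_frame(image_path, click_xy, radius=40, blur_radius=8):
    """Render one BubbleView frame: the stimulus is blurred everywhere
    except a sharp circular 'bubble' centred on the participant's click."""
    sharp = Image.open(image_path).convert("RGB")
    blurred = sharp.filter(ImageFilter.GaussianBlur(blur_radius))

    # Build a circular mask: white inside the bubble, black elsewhere.
    mask = Image.new("L", sharp.size, 0)
    draw = ImageDraw.Draw(mask)
    x, y = click_xy
    draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=255)

    # Composite: sharp pixels where the mask is white, blurred elsewhere.
    return Image.composite(sharp, blurred, mask)

# Hypothetical usage: a participant clicks near the eyes of a face stimulus.
frame = bubbleview_frame("face_stimulus.png", click_xy=(210, 160))
frame.save("bubbleview_frame.png")
```

Because such clicks can be logged in an ordinary web browser, the paradigm scales to crowdsourcing settings where eye-tracking hardware is unavailable, which is what makes the lab-versus-crowdsourcing comparison in this paper possible.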