{"title":"眼注视与瞳孔直径作为虚拟现实情感分类的眼动追踪特征","authors":"L. Zheng, J. Mountstephens, J. Teo","doi":"10.1109/ICOCO53166.2021.9673503","DOIUrl":null,"url":null,"abstract":"The usage of eye-tracking technology is becoming increasingly popular in machine learning applications, particularly in the area of affective computing and emotion recognition. Typically, emotion recognition studies utilize popular physiological signals such as electroencephalography (EEG), while the research on emotion detection that relies solely on eye-tracking data is limited. In this study, an empirical comparison of the accuracy of eye-tracking-based emotion recognition in a virtual reality (VR) environment using eye fixation versus pupil diameter as the classification feature is performed. We classified emotions into four distinct classes according to Russell's four-quadrant Circumplex Model of Affect. 360° videos are presented as emotional stimuli to participants in a VR environment to evoke the user's emotions. Three separate experiments were conducted using Support Vector Machines (SVMs) as the classification algorithm for the two chosen eye features. The results showed that emotion classification using fixation position obtained an accuracy of 75% while pupil diameter obtained an accuracy of 57%. For four-quadrant emotion recognition, eye fixation as a learning feature produces better classification accuracy compared to pupil diameter. Therefore, this empirical study has shown that eye-tracking-based emotion recognition systems would benefit from using features based on eye fixation data rather than pupil size.","PeriodicalId":262412,"journal":{"name":"2021 IEEE International Conference on Computing (ICOCO)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification\",\"authors\":\"L. Zheng, J. Mountstephens, J. Teo\",\"doi\":\"10.1109/ICOCO53166.2021.9673503\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The usage of eye-tracking technology is becoming increasingly popular in machine learning applications, particularly in the area of affective computing and emotion recognition. Typically, emotion recognition studies utilize popular physiological signals such as electroencephalography (EEG), while the research on emotion detection that relies solely on eye-tracking data is limited. In this study, an empirical comparison of the accuracy of eye-tracking-based emotion recognition in a virtual reality (VR) environment using eye fixation versus pupil diameter as the classification feature is performed. We classified emotions into four distinct classes according to Russell's four-quadrant Circumplex Model of Affect. 360° videos are presented as emotional stimuli to participants in a VR environment to evoke the user's emotions. Three separate experiments were conducted using Support Vector Machines (SVMs) as the classification algorithm for the two chosen eye features. The results showed that emotion classification using fixation position obtained an accuracy of 75% while pupil diameter obtained an accuracy of 57%. For four-quadrant emotion recognition, eye fixation as a learning feature produces better classification accuracy compared to pupil diameter. 
Therefore, this empirical study has shown that eye-tracking-based emotion recognition systems would benefit from using features based on eye fixation data rather than pupil size.\",\"PeriodicalId\":262412,\"journal\":{\"name\":\"2021 IEEE International Conference on Computing (ICOCO)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Computing (ICOCO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICOCO53166.2021.9673503\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Computing (ICOCO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOCO53166.2021.9673503","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification
Eye-tracking technology is increasingly used in machine learning applications, particularly in affective computing and emotion recognition. Emotion recognition studies typically rely on physiological signals such as electroencephalography (EEG), whereas research on emotion detection using eye-tracking data alone remains limited. This study empirically compares the accuracy of eye-tracking-based emotion recognition in a virtual reality (VR) environment using eye fixation versus pupil diameter as the classification feature. We classified emotions into four distinct classes according to Russell's four-quadrant Circumplex Model of Affect. 360° videos were presented as emotional stimuli in the VR environment to evoke participants' emotions. Three separate experiments were conducted using Support Vector Machines (SVMs) as the classification algorithm for the two chosen eye features. The results showed that emotion classification using fixation position achieved an accuracy of 75%, while classification using pupil diameter achieved 57%. For four-quadrant emotion recognition, eye fixation as a learning feature therefore produces better classification accuracy than pupil diameter. This empirical study thus suggests that eye-tracking-based emotion recognition systems would benefit from features derived from eye fixation data rather than pupil size.
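As a rough illustration of the comparison the abstract describes (and not the authors' actual pipeline), the sketch below trains two scikit-learn SVMs, one on 2-D fixation positions and one on 1-D pupil diameters, and reports their held-out accuracy on four-quadrant labels. All data, feature shapes, and hyperparameters here are assumptions made for illustration; the real study extracted these features from VR eye-tracking recordings.

```python
# Minimal sketch (synthetic data, not the paper's pipeline): compare an SVM
# trained on 2-D fixation positions against one trained on 1-D pupil
# diameter for four-quadrant (Russell Circumplex) emotion labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 4, size=n)  # 0..3 = the four Circumplex quadrants

# Hypothetical features: fixation (x, y) in normalized screen coordinates,
# pupil diameter in millimetres, each weakly correlated with the label.
fixation = rng.normal(loc=labels[:, None] * 0.2, scale=0.3, size=(n, 2))
pupil = rng.normal(loc=3.0 + labels * 0.05, scale=0.4, size=(n, 1))

def quadrant_accuracy(X, y):
    """Stratified train/test split, RBF-kernel SVM, held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

print("fixation-position accuracy:", quadrant_accuracy(fixation, labels))
print("pupil-diameter accuracy:  ", quadrant_accuracy(pupil, labels))
```

On real recordings the same structure applies: the richer spatial information in fixation positions gives the classifier more to separate quadrants with than a single diameter value, which is consistent with the 75% versus 57% gap the paper reports.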