Eye Fixation Versus Pupil Diameter as Eye-Tracking Features for Virtual Reality Emotion Classification

L. Zheng, J. Mountstephens, J. Teo
DOI: 10.1109/ICOCO53166.2021.9673503
Published in: 2021 IEEE International Conference on Computing (ICOCO)
Publication date: 2021-11-17
Citations: 2

Abstract

The usage of eye-tracking technology is becoming increasingly popular in machine learning applications, particularly in the area of affective computing and emotion recognition. Typically, emotion recognition studies utilize popular physiological signals such as electroencephalography (EEG), while the research on emotion detection that relies solely on eye-tracking data is limited. In this study, an empirical comparison of the accuracy of eye-tracking-based emotion recognition in a virtual reality (VR) environment using eye fixation versus pupil diameter as the classification feature is performed. We classified emotions into four distinct classes according to Russell's four-quadrant Circumplex Model of Affect. 360° videos are presented as emotional stimuli to participants in a VR environment to evoke the user's emotions. Three separate experiments were conducted using Support Vector Machines (SVMs) as the classification algorithm for the two chosen eye features. The results showed that emotion classification using fixation position obtained an accuracy of 75% while pupil diameter obtained an accuracy of 57%. For four-quadrant emotion recognition, eye fixation as a learning feature produces better classification accuracy compared to pupil diameter. Therefore, this empirical study has shown that eye-tracking-based emotion recognition systems would benefit from using features based on eye fixation data rather than pupil size.
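The abstract describes four-quadrant emotion classification with Support Vector Machines over two candidate eye features. A minimal sketch of that setup is below, assuming fixation position plus duration as the feature vector and scikit-learn's `SVC` as the classifier; the data here is synthetic and the feature names are illustrative, since the paper's actual dataset and preprocessing are not given in this abstract.

```python
# Hypothetical sketch: four-quadrant SVM emotion classification from
# eye-fixation features, following the general setup the abstract describes.
# The features and data below are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Four classes, one per quadrant of Russell's Circumplex Model of Affect
# (high/low valence x high/low arousal).
QUADRANTS = ["HV-HA", "HV-LA", "LV-LA", "LV-HA"]

# Assumed feature vector per sample: mean fixation x, mean fixation y,
# and mean fixation duration (all synthetic here).
X = rng.normal(size=(400, 3))
y = rng.integers(0, len(QUADRANTS), size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM (SVC handles the
# four-class problem via its built-in one-vs-one scheme).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"four-quadrant accuracy: {acc:.2f}")
```

With real fixation data, the same pipeline could be rerun with pupil-diameter features in place of fixation position to reproduce the paper's feature comparison; on the random labels above, accuracy naturally hovers near chance (0.25).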