Automated facial coding: validation of basic emotions and FACS AUs in FaceReader

Impact Factor: 1.6 · CAS Zone 4 (Medicine) · JCR Q2 (Economics)
Peter Lewinski, Tim M. den Uyl, Crystal Butler
{"title":"Automated facial coding: validation of basic emotions and FACS AUs in FaceReader","authors":"Peter Lewinski, Tim M. den Uyl, Crystal Butler","doi":"10.1037/NPE0000028","DOIUrl":null,"url":null,"abstract":"In this study, we validated automated facial coding (AFC) software—FaceReader (Noldus, 2014)—on 2 publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FACS) index of agreement. In 2005, matching scores of 89% were reported for FaceReader. However, previous research used a version of FaceReader that implemented older algorithms (version 1.0) and did not contain FACS classifiers. In this study, we tested the newest version (6.0). FaceReader recognized 88% of the target emotional labels in the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and Amsterdam Dynamic Facial Expression Set (ADFES). The software reached a FACS index of agreement of 0.67 on average in both datasets. The results of this validation test are meaningful only in relation to human performance rates for both basic emotion recognition and FACS coding. The human emotions recognition for the 2 datasets was 85%, therefore FaceReader is as good at recognizing emotions as humans. To receive FACS certification, a human coder must reach an agreement of 0.70 with the master coding of the final test. Even though FaceReader did not attain this score, action units (AUs) 1, 2, 4, 5, 6, 9, 12, 15, and 25 might be used with high accuracy. 
We believe that FaceReader has proven to be a reliable indicator of basic emotions in the past decade and has a potential to become similarly robust with FACS.","PeriodicalId":45695,"journal":{"name":"Journal of Neuroscience Psychology and Economics","volume":"44 1","pages":"227-236"},"PeriodicalIF":1.6000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"267","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Neuroscience Psychology and Economics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1037/NPE0000028","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 267

Abstract

In this study, we validated automated facial coding (AFC) software—FaceReader (Noldus, 2014)—on two publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FACS) index of agreement. In 2005, matching scores of 89% were reported for FaceReader. However, previous research used a version of FaceReader that implemented older algorithms (version 1.0) and did not contain FACS classifiers. In this study, we tested the newest version (6.0). FaceReader recognized 88% of the target emotional labels in the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and the Amsterdam Dynamic Facial Expression Set (ADFES). The software reached a FACS index of agreement of 0.67 on average across both datasets. The results of this validation test are meaningful only in relation to human performance rates for both basic emotion recognition and FACS coding. Human emotion recognition for the two datasets was 85%; therefore, FaceReader is as good at recognizing emotions as humans are. To receive FACS certification, a human coder must reach an agreement of 0.70 with the master coding of the final test. Even though FaceReader did not attain this score, action units (AUs) 1, 2, 4, 5, 6, 9, 12, 15, and 25 might be used with high accuracy. We believe that FaceReader has proven to be a reliable indicator of basic emotions in the past decade and has the potential to become similarly robust with FACS.
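The FACS index of agreement reported above (0.67 for FaceReader, 0.70 required for human certification) is conventionally computed as twice the number of action units both coders scored, divided by the total number of AUs scored by either coder. A minimal sketch of that calculation, assuming each coder's output is represented simply as a set of AU numbers (the function name and example AU sets are illustrative, not taken from the paper):

```python
def facs_agreement(coder_a, coder_b):
    """Index of agreement between two FACS codings:
    2 * |AUs scored by both| / (|AUs scored by A| + |AUs scored by B|)."""
    a, b = set(coder_a), set(coder_b)
    if not a and not b:
        # Both coders scored no AUs at all: treat as perfect agreement.
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical example: software codes AUs 1, 2, 12, 25;
# the master coding contains AUs 1, 2, 6, 12.
print(facs_agreement({1, 2, 12, 25}, {1, 2, 6, 12}))  # → 0.75
```

On this measure, an index of 0.70 means roughly seven of every ten scored AUs are shared with the master coding, which is the threshold a human coder must reach for FACS certification.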