Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism.

Evangelos Sariyanidi, Lisa Yankowitz, Robert T Schultz, John D Herrington, Birkan Tunc, Jeffrey Cohn
{"title":"Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism.","authors":"Evangelos Sariyanidi, Lisa Yankowitz, Robert T Schultz, John D Herrington, Birkan Tunc, Jeffrey Cohn","doi":"10.1109/fg61629.2025.11099288","DOIUrl":null,"url":null,"abstract":"<p><p>The Facial Action Coding System (FACS) has been used by numerous studies to investigate the links between facial behavior and mental health. The laborious and costly process of FACS coding has motivated the development of machine learning frameworks for Action Unit (AU) detection. Despite intense efforts spanning three decades, the detection accuracy for many AUs is considered to be below the threshold needed for behavioral research. Also, many AUs are excluded altogether, making it impossible to fulfill the ultimate goal of FACS-the representation of <i>any</i> facial expression in its entirety. This paper considers an alternative approach. Instead of creating automated tools that mimic FACS experts, we propose to use a new coding system that mimics the key properties of FACS. Specifically, we construct a data-driven coding system called the Facial Basis, which contains units that correspond to localized and interpretable 3D facial movements, and overcomes three structural limitations of automated FACS coding. First, the proposed method is completely unsupervised, bypassing costly, laborious and variable manual annotation. Second, Facial Basis reconstructs all observable movement, rather than relying on a limited repertoire of recognizable movements (as in automated FACS). Finally, the Facial Basis units are additive, whereas AUs may fail detection when they appear in a non-additive combination. The proposed method outperforms the most frequently used AU detector in predicting autism diagnosis from in-person and remote conversations, highlighting the importance of encoding facial behavior comprehensively. To our knowledge, Facial Basis is the first alternative to FACS for deconstructing facial expressions in videos into localized movements. We provide an open source implementation of the method at github.com/sariyanidi/FacialBasis.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":"2025 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12369895/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/fg61629.2025.11099288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/8/6 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The Facial Action Coding System (FACS) has been used by numerous studies to investigate the links between facial behavior and mental health. The laborious and costly process of FACS coding has motivated the development of machine learning frameworks for Action Unit (AU) detection. Despite intense efforts spanning three decades, the detection accuracy for many AUs is considered to be below the threshold needed for behavioral research. Also, many AUs are excluded altogether, making it impossible to fulfill the ultimate goal of FACS: the representation of any facial expression in its entirety. This paper considers an alternative approach. Instead of creating automated tools that mimic FACS experts, we propose to use a new coding system that mimics the key properties of FACS. Specifically, we construct a data-driven coding system called the Facial Basis, which contains units that correspond to localized and interpretable 3D facial movements, and overcomes three structural limitations of automated FACS coding. First, the proposed method is completely unsupervised, bypassing costly, laborious and variable manual annotation. Second, Facial Basis reconstructs all observable movement, rather than relying on a limited repertoire of recognizable movements (as in automated FACS). Finally, the Facial Basis units are additive, whereas AUs may fail detection when they appear in a non-additive combination. The proposed method outperforms the most frequently used AU detector in predicting autism diagnosis from in-person and remote conversations, highlighting the importance of encoding facial behavior comprehensively. To our knowledge, Facial Basis is the first alternative to FACS for deconstructing facial expressions in videos into localized movements. We provide an open source implementation of the method at github.com/sariyanidi/FacialBasis.
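The core idea of additive, localized units can be illustrated with a minimal sketch: an observed 3D facial deformation is approximated as a non-negative combination of dictionary columns, each encoding a localized movement. This is not the authors' implementation (see github.com/sariyanidi/FacialBasis for that); the dictionary D, the function names, and the use of non-negative least squares are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): reconstruct an
# observed 3D facial deformation additively from a dictionary of localized units.
import numpy as np
from scipy.optimize import nnls


def decompose_expression(delta: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Estimate per-unit activation coefficients for one frame.

    delta : (3N,) vector of vertex displacements from the neutral 3D face.
    D     : (3N, K) dictionary whose columns are localized movement units
            (assumed here; the paper learns such units without supervision).

    Returns a (K,) vector of non-negative coefficients, so that D @ coeffs
    additively reconstructs the observed movement.
    """
    coeffs, _residual = nnls(D, delta)  # non-negative least squares (an assumption)
    return coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_vertices, n_units = 500, 12
    D = rng.normal(size=(3 * n_vertices, n_units))        # toy dictionary
    true_coeffs = np.array([0.8, 0.0, 1.2] + [0.0] * 9)   # two active units
    delta = D @ true_coeffs                               # synthetic frame
    print(decompose_expression(delta, D).round(2))        # recovers the activations
```

Because the coefficients combine additively, co-occurring movements do not need to be enumerated as separate categories, which is the structural contrast with AU detection drawn in the abstract.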
