Comparing three algorithms of automated facial expression analysis in autistic children: different sensitivities but consistent proportions.

IF 5.5 | Q1 | Genetics & Heredity (Medicine, Tier 1)
Liora Manelis-Baram, Tal Barami, Michal Ilan, Gal Meiri, Idan Menashe, Elizabeth Soskin, Carmel Sofer, Ilan Dinstein
Molecular Autism, vol. 16, no. 1, p. 50. Published 2025-10-09. DOI: 10.1186/s13229-025-00685-x
Citations: 0

Abstract

Background: Difficulties with non-verbal communication, including atypical use of facial expressions, are a core feature of autism. Quantifying atypical use of facial expressions during naturalistic social interactions in a reliable, objective, and direct manner is difficult, but potentially possible with facial analysis computer vision algorithms that identify facial expressions in video recordings.

Methods: We analyzed >5 million video frames from 100 verbal children, 2–7 years old (72 with autism and 28 controls), who were recorded during a ~45-minute ADOS-2 assessment using modules 2 or 3, during which they interacted with a clinician. Three different facial analysis algorithms (iMotions, FaceReader, and Py-Feat) were used to identify the presence of six facial expressions (anger, fear, sadness, surprise, disgust, and happiness) in each video frame. We then compared results across algorithms and across the autism and control groups using robust non-parametric statistical tests.
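The group-comparison step described above can be sketched in minimal form: given per-frame expression labels from one algorithm, compute each child's proportion of frames showing a given expression, then compare the autism and control groups with a non-parametric rank test. This is not the authors' code; all function names and data below are illustrative placeholders, and the Mann-Whitney U statistic is used as one representative non-parametric test.

```python
# Hedged sketch (not the study's actual pipeline): per-child expression
# proportions followed by a Mann-Whitney U comparison between groups.
# All labels and group data here are synthetic placeholders.

EXPRESSIONS = ("anger", "fear", "sadness", "surprise", "disgust", "happiness")

def expression_proportion(frame_labels, expression):
    """Fraction of a child's video frames classified as `expression`."""
    if not frame_labels:
        return 0.0
    return sum(lab == expression for lab in frame_labels) / len(frame_labels)

def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic (smaller of the two U values).

    Ties receive the mean of the ranks they span.
    """
    pooled = sorted([(v, "x") for v in xs] + [(v, "y") for v in ys])
    rank_sum_x = 0.0
    i = 0
    while i < len(pooled):
        # Find the run of tied values starting at position i.
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        mean_rank = (i + 1 + j) / 2.0  # 1-based ranks i+1 .. j share this rank
        rank_sum_x += mean_rank * sum(1 for k in range(i, j) if pooled[k][1] == "x")
        i = j
    n_x, n_y = len(xs), len(ys)
    u_x = rank_sum_x - n_x * (n_x + 1) / 2.0
    return min(u_x, n_x * n_y - u_x)

# Illustrative use: proportion of "happiness" frames per child in each group.
autism_props = [expression_proportion(["happiness", "anger", "happiness"], "happiness"),
                expression_proportion(["sadness", "happiness"], "happiness")]
control_props = [expression_proportion(["happiness"] * 3, "happiness"),
                 expression_proportion(["anger", "fear"], "happiness")]
u = mann_whitney_u(autism_props, control_props)  # small U suggests group separation
```

In practice one would compute such a proportion per child for each of the six expressions and each of the three algorithms, and the study additionally reports correlations of these quantities with ADOS-2 CSS severity scores.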

Results: There were significant differences in the performance of the three facial analysis algorithms, including differences in the proportion of frames identified as containing a face and in the proportion of frames classified as containing each of the six examined facial expressions. Nevertheless, analyses across all three algorithms demonstrated no significant differences between children with autism and controls in the quantity of any facial expression produced. Furthermore, the quantity of facial expressions did not correlate with autism symptom severity as measured by ADOS-2 CSS scores.

Limitations: The current findings are limited to verbal children with autism who completed ADOS-2 assessments using modules 2 and 3 and were able to sit in a stable manner while facing a wall-mounted camera. Furthermore, the analyses focused on comparing the quantity of facial expressions across groups rather than their quality, timing, or social context.

Conclusions: Commonly used automated facial analysis algorithms exhibit large variability in their output when identifying facial expressions of young children during naturalistic social interactions. Nonetheless, none of the three algorithms identified differences in the quantity of facial expressions across groups, suggesting that atypical production of facial expressions in verbal children with autism is likely related to their quality, timing, and social context rather than their quantity during natural social interaction.

Source journal: Molecular Autism (Genetics & Heredity; Neurosciences)

CiteScore: 12.10
Self-citation rate: 1.60%
Annual publications: 44
Review time: 17 weeks

About the journal: Molecular Autism is a peer-reviewed, open access journal that publishes high-quality basic, translational and clinical research that has relevance to the etiology, pathobiology, or treatment of autism and related neurodevelopmental conditions. Research that includes integration across levels is encouraged. Molecular Autism publishes empirical studies, reviews, and brief communications.