Learning from Machine Learning: Advancing from Static Images to Dynamic Video-Based Quantification of Facial Palsy.

IF 1.6 · Medicine, Region 3 · Q2 (Surgery)
Sandhya Kalavacherla, Morgan Davis Mills, Jacqueline J Greene
Journal: Facial Plastic Surgery & Aesthetic Medicine
DOI: 10.1089/fpsam.2024.0381
Published: 2025-06-20 (Journal Article)
Citations: 0

Abstract

Background: An automated method to accurately quantify facial function from videos has been a long-standing challenge in facial palsy (FP) management. Objective: To compare the accuracy of a Python open-source machine learning algorithm (Python-OS) with that of a standard image-based analysis tool (Emotrics) in tracking facial movement among patients with FP, as measured by error rates. Methods: Landmarks were generated on images of patients with FP using both Python-OS and Emotrics, and on patient videos using Python-OS. Weighted error rates were calculated and compared between algorithms using analysis-of-variance tests. Results: Overall major error rates were 50.3%, 54.3%, and 9.2% for the Emotrics image, Python-OS image, and Python-OS video analyses, respectively (p < 0.001). Compared with the image analyses, Python-OS video analysis had higher accuracy across all facial features (p = 0.03) and FP severities (p < 0.001). Video analysis also distinguished FP-specific temporal patterns: the linear relationship between right and left oral commissure movements in normal function (R = 0.99) became nonlinear in flaccid (R = 0.75) and synkinetic (R = 0.72) FP. Conclusion: We report high relative accuracy of dynamic FP quantification through Python-OS, improving the clinical utility of AI-aided FP assessment.
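The temporal analysis described in the Results, correlating per-frame movements of the two oral commissures, can be sketched in a few lines of NumPy. The code below is a minimal illustration, not the study's actual pipeline: the displacement traces are synthetic stand-ins for landmark trajectories that a video landmark detector would produce, and the trace shapes are assumptions chosen only to contrast a symmetric (normal) pattern with an asymmetric (flaccid) one.

```python
import numpy as np

def commissure_correlation(left, right):
    """Pearson R between left and right oral commissure displacement
    traces, sampled once per video frame."""
    return float(np.corrcoef(left, right)[0, 1])

# Hypothetical per-frame vertical displacements (mm) during a smile.
# These curves are illustrative, not the study's data.
t = np.linspace(0.0, 1.0, 100)
healthy = 5.0 * np.sin(np.pi * t)        # symmetric excursion on both sides
flaccid = 2.0 * np.sin(np.pi * t) ** 3   # weakened, distorted excursion

r_normal = commissure_correlation(healthy, healthy)  # ~= 1.0 (linear relation)
r_palsy = commissure_correlation(healthy, flaccid)   # lower: nonlinear relation

print(f"normal R = {r_normal:.2f}, palsy R = {r_palsy:.2f}")
```

A drop in R between the two sides, as in the study's flaccid (R = 0.75) and synkinetic (R = 0.72) cases versus normal function (R = 0.99), flags the loss of linear coupling that only video, not static images, can reveal.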
