Learning from Machine Learning: Advancing from Static Images to Dynamic Video-Based Quantification of Facial Palsy
Sandhya Kalavacherla, Morgan Davis Mills, Jacqueline J Greene
Facial Plastic Surgery & Aesthetic Medicine, published 2025-06-20. DOI: 10.1089/fpsam.2024.0381
Abstract
Background: An automated method to accurately quantify facial function from videos has been a long-standing challenge in facial palsy (FP) management. Objective: To compare the accuracy of a Python open-source machine learning algorithm (Python-OS) with a standard image-based analysis tool (Emotrics) in tracking facial movement among patients with FP, as measured by error rates. Methods: Landmarks were generated on images of patients with FP using Python-OS and Emotrics, and on patient videos using Python-OS. Weighted error rates were calculated and compared between algorithms using analysis of variance tests. Results: Overall major error rates were 50.3%, 54.3%, and 9.2% for the Emotrics image, Python-OS image, and Python-OS video analyses, respectively (p < 0.001). Compared with image analyses, Python-OS video analysis had higher accuracy across all facial features (p = 0.03) and FP severities (p < 0.001). Video analysis allowed us to distinguish FP-specific temporal patterns; the linear relationship between right and left oral commissure movements in normal function (R = 0.99) became nonlinear in flaccid (R = 0.75) and synkinetic (R = 0.72) FP. Conclusion: We report high relative accuracy of dynamic FP quantification through Python-OS, improving the clinical utility of AI-aided FP assessment.
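The abstract does not specify how the right-left commissure correlation was computed, and it does not name the underlying open-source library. As a minimal sketch of the general idea, the snippet below computes a Pearson correlation between hypothetical right and left oral commissure displacement time series (one value per video frame); the function name and the synthetic signals are illustrative assumptions, not the authors' implementation or data.

```python
import numpy as np


def commissure_correlation(right_disp, left_disp):
    """Pearson correlation between right and left oral commissure
    displacement time series (one sample per video frame).

    A value near 1.0 indicates the two sides move together (symmetric,
    roughly linear relationship); lower values indicate asymmetric
    movement, as reported for flaccid and synkinetic FP.
    """
    return np.corrcoef(right_disp, left_disp)[0, 1]


# Synthetic illustration only (not study data): a smile-like excursion.
t = np.linspace(0.0, 1.0, 100)
right_side = np.sin(np.pi * t)            # healthy side
symmetric_left = 0.98 * np.sin(np.pi * t)  # near-identical contralateral motion
# Flaccid-like side: reduced amplitude plus frame-to-frame noise.
rng = np.random.default_rng(0)
flaccid_left = 0.2 * np.sin(np.pi * t) + 0.1 * rng.normal(size=t.size)

r_normal = commissure_correlation(right_side, symmetric_left)   # close to 1.0
r_flaccid = commissure_correlation(right_side, flaccid_left)    # substantially lower
```

With symmetric motion the relationship is essentially linear, so the correlation approaches 1.0; degrading one side's signal lowers it, mirroring the qualitative pattern the study reports (R = 0.99 normal vs. R = 0.75 flaccid), though the exact values here depend entirely on the synthetic signals chosen.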