Guglielmo Menchetti, Zhanli Chen, Diana J. Wilkie, R. Ansari, Y. Yardimci, A. Enis Cetin
2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), November 2019
DOI: 10.1109/GlobalSIP45357.2019.8969274
Pain Detection from Facial Videos Using Two-Stage Deep Learning
A new method to objectively measure pain using computer vision and machine learning technologies is presented. Our method seeks to capture facial expressions of pain to detect pain, especially when a patient cannot communicate pain verbally. This approach relies on facial-muscle-based Action Units (AUs), defined by the Facial Action Coding System (FACS), that are associated with pain. Employing human FACS coding experts in clinical settings is impractical because the task is too labor-intensive, so recent research has sought computer-based solutions to the problem. An effective automated system for performing the task is proposed here: we develop an end-to-end deep learning-based Automated Facial Expression Recognition (AFER) system that jointly detects the complete set of pain-related AUs. Each facial video clip is processed frame by frame, using a deep convolutional neural network to estimate a vector of AU likelihood values for each frame. The AU vectors are concatenated to form a table of AU values for the given video clip. Our results show significantly improved performance compared with those obtained with other known methods.
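The two-stage structure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `frame_au_likelihoods` is a hypothetical stand-in for the paper's per-frame deep CNN, the number of pain-related AUs is assumed, and the clip-level decision rule here is a simple mean-likelihood threshold rather than the learned second stage.

```python
import numpy as np

NUM_AUS = 9  # assumed count of pain-related AUs; the actual set follows FACS


def frame_au_likelihoods(frame, rng):
    """Stage 1 stand-in: a real system would run a deep CNN on the frame
    and return one likelihood value per pain-related AU."""
    return rng.random(NUM_AUS)  # random placeholder likelihoods in [0, 1)


def clip_au_table(frames, rng):
    """Stack per-frame AU likelihood vectors into a (num_frames x NUM_AUS)
    table, as described for a single video clip."""
    return np.stack([frame_au_likelihoods(f, rng) for f in frames])


def detect_pain(au_table, threshold=0.5):
    """Stage 2 stand-in: classify the whole clip from its AU table.
    A mean-likelihood threshold replaces the paper's learned model."""
    return bool(au_table.mean() > threshold)


rng = np.random.default_rng(0)
frames = [object()] * 30  # placeholder for 30 decoded video frames
table = clip_au_table(frames, rng)
print(table.shape, detect_pain(table))
```

The key design point the abstract highlights is the separation of concerns: stage 1 produces interpretable per-frame AU evidence, and stage 2 aggregates it over time, so the clip-level classifier never sees raw pixels.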