Prem Chand Yadav, Hari Singh Dhillon, Ankit Patel, Anurag Singh
{"title":"不同面部动作跟踪模型和技术的比较分析","authors":"Prem Chand Yadav, Hari Singh Dhillon, Ankit Patel, Anurag Singh","doi":"10.1109/ICETEESES.2016.7581407","DOIUrl":null,"url":null,"abstract":"The tracking of facial activities from video is an important and challenging problem. Now a day, many computer vision techniques have been proposed to characterize the facial activities in the three levels (from local to global). First level is the bottom level, in which the facial feature tracking focuses on detecting and tracking of the prominent local landmarks surrounding facial components (e.g. mouth, eyebrow, etc), in second level the facial action units (AUs) characterize the specific behaviors of these local facial components (e.g. mouth open, eyebrow raiser, etc) and the third level is facial expression level, which represents subjects emotions (e.g. Surprise, Happy, Anger, etc.) and controls the global muscular movement of the whole face. Most of the existing methods focus on one or two levels of facial activities, and track (or recognize) them separately. 
In this paper, various facial action tracking models and techniques are compared in different conditions such as the performance of Active Facial Tracking for Fatigue Detection, Real Time 3D Face Pose Tracking from an Uncalibrated Camera, Simultaneous facial action tracking and expression recognition using a particle filter and Simultaneous Tracking and Facial Expression Recognition using Multiperson and Multiclass Autoregressive Models.","PeriodicalId":322442,"journal":{"name":"2016 International Conference on Emerging Trends in Electrical Electronics & Sustainable Energy Systems (ICETEESES)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A comparative analysis of different facial action tracking models and techniques\",\"authors\":\"Prem Chand Yadav, Hari Singh Dhillon, Ankit Patel, Anurag Singh\",\"doi\":\"10.1109/ICETEESES.2016.7581407\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The tracking of facial activities from video is an important and challenging problem. Now a day, many computer vision techniques have been proposed to characterize the facial activities in the three levels (from local to global). First level is the bottom level, in which the facial feature tracking focuses on detecting and tracking of the prominent local landmarks surrounding facial components (e.g. mouth, eyebrow, etc), in second level the facial action units (AUs) characterize the specific behaviors of these local facial components (e.g. mouth open, eyebrow raiser, etc) and the third level is facial expression level, which represents subjects emotions (e.g. Surprise, Happy, Anger, etc.) and controls the global muscular movement of the whole face. Most of the existing methods focus on one or two levels of facial activities, and track (or recognize) them separately. 
In this paper, various facial action tracking models and techniques are compared in different conditions such as the performance of Active Facial Tracking for Fatigue Detection, Real Time 3D Face Pose Tracking from an Uncalibrated Camera, Simultaneous facial action tracking and expression recognition using a particle filter and Simultaneous Tracking and Facial Expression Recognition using Multiperson and Multiclass Autoregressive Models.\",\"PeriodicalId\":322442,\"journal\":{\"name\":\"2016 International Conference on Emerging Trends in Electrical Electronics & Sustainable Energy Systems (ICETEESES)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-03-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 International Conference on Emerging Trends in Electrical Electronics & Sustainable Energy Systems (ICETEESES)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICETEESES.2016.7581407\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Conference on Emerging Trends in Electrical Electronics & Sustainable Energy Systems (ICETEESES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICETEESES.2016.7581407","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A comparative analysis of different facial action tracking models and techniques
Tracking facial activity in video is an important and challenging problem. Many computer vision techniques have been proposed to characterize facial activity at three levels, from local to global. The first (bottom) level is facial feature tracking, which detects and tracks prominent local landmarks around facial components (e.g. the mouth and eyebrows). At the second level, facial action units (AUs) characterize the specific behaviors of these local components (e.g. mouth open, eyebrow raised). The third level is the facial expression level, which represents the subject's emotion (e.g. surprise, happiness, anger) and governs the global muscular movement of the whole face. Most existing methods focus on only one or two of these levels and track (or recognize) them separately. In this paper, various facial action tracking models and techniques are compared under different conditions: active facial tracking for fatigue detection, real-time 3D face pose tracking from an uncalibrated camera, simultaneous facial action tracking and expression recognition using a particle filter, and simultaneous tracking and facial expression recognition using multiperson and multiclass autoregressive models.
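To make the first (landmark-tracking) level and the particle-filter technique mentioned above concrete, here is a minimal, generic sketch of a bootstrap particle filter tracking a single 2D facial landmark across frames. It is an illustration only, not the method of any paper surveyed here; the random-walk motion model, Gaussian observation likelihood, and all parameter values (`n_particles`, `motion_std`, `obs_std`) are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(observations, n_particles=500, motion_std=2.0, obs_std=5.0):
    """Track a 2D point (e.g. a mouth-corner landmark) over frames with a
    bootstrap particle filter: random-walk motion model, Gaussian likelihood."""
    # Initialise the particle cloud around the first observation.
    particles = observations[0] + rng.normal(0.0, obs_std, size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # Predict: diffuse particles under the random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: reweight particles by the Gaussian observation likelihood.
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / obs_std**2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(weights @ particles)
        # Resample (systematic) to avoid weight degeneracy.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        idx = np.minimum(idx, n_particles - 1)  # guard against float round-off
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Synthetic example: noisy observations of a landmark drifting to the right.
truth = np.stack([np.linspace(100.0, 140.0, 30), np.full(30, 80.0)], axis=1)
obs = truth + rng.normal(0.0, 5.0, size=truth.shape)
est = particle_filter_track(obs)
```

A full facial action tracker of the kind compared in the paper would run many such landmarks jointly and couple them to the AU and expression levels; this sketch only shows the per-landmark filtering step.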