Attribute classification for the analysis of genuineness of facial expressions

G. Florio, M. Buemi, D. Acevedo, P. Negri
11th International Conference of Pattern Recognition Systems (ICPRS 2021). DOI: 10.1049/icp.2021.1467
In this work we study different artificial neural network variants for classifying facial expressions in video according to their genuineness, a task that is far from trivial even for human beings. The main analysis compares deep feed-forward neural networks with recurrent neural networks, a type of network capable of extracting information from a sequence and retaining it over time; in this way, a video can be classified using not only its own features but also those of its predecessors. Since the number of videos in the dataset is rather small, a new metric is proposed to allow a more fine-grained analysis. The results suggest that the facial features that allow distinguishing a genuine expression from a faked one are strongly tied to the subject who performs them, which suggests that developing a universal, subject-independent classifier is unlikely to be feasible. Regarding the comparison between the two types of networks, although the recurrent variants do not outperform the convnets, they achieve similar results with fewer training epochs. The dataset used in this paper originated from the Real Versus Fake Expressed Emotion Challenge at ICCV 2017.
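The abstract's core contrast — a feed-forward classifier that sees each video's features in isolation versus a recurrent one whose hidden state carries information across frames — can be illustrated with a minimal sketch. The paper does not publish its architecture, so all dimensions, weight names, and the Elman-style recurrence below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (the paper does not specify its architecture):
FEAT, HID, T = 8, 4, 5  # per-frame feature dim, hidden dim, number of frames

def feedforward_score(frames, w, b):
    """Feed-forward baseline: pool per-frame features, ignoring temporal order."""
    pooled = frames.mean(axis=0)                      # (FEAT,)
    return 1.0 / (1.0 + np.exp(-(w @ pooled + b)))    # genuine-vs-fake probability

def recurrent_score(frames, Wx, Wh, wo, bo):
    """Elman-style recurrence: the hidden state is updated frame by frame,
    so each frame is scored in the context of its predecessors."""
    h = np.zeros(HID)
    for x in frames:                      # iterate frames in temporal order
        h = np.tanh(Wx @ x + Wh @ h)      # h summarizes the sequence so far
    return 1.0 / (1.0 + np.exp(-(wo @ h + bo)))

frames = rng.standard_normal((T, FEAT))   # stand-in for extracted facial features
p_ff = feedforward_score(frames, rng.standard_normal(FEAT), 0.0)
p_rnn = recurrent_score(frames,
                        rng.standard_normal((HID, FEAT)),
                        rng.standard_normal((HID, HID)),
                        rng.standard_normal(HID), 0.0)
print(p_ff, p_rnn)                        # both are probabilities in (0, 1)
```

The design difference is that permuting the frames leaves `feedforward_score` unchanged (it only sees the mean), while `recurrent_score` generally changes — which is why, as the abstract notes, the recurrent variant can exploit each frame's predecessors.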