{"title":"基于迁移学习和生成模型的面部表情情感识别","authors":"Tomoki Kusunose, Xin Kang, Keita Kiuchi, Ryota Nishimura, M. Sasayama, Kazuyuki Matsumoto","doi":"10.1109/ICSAI57119.2022.10005478","DOIUrl":null,"url":null,"abstract":"Facial expression emotion recognition has been a popular research topic, which played an important role in assisting the natural human-machine conversation. The conventional method for emotion estimation from facial expressions is to learn a CNN-based image classification model from scratch, However, learning such model requires a large number of labeled facial expression images, which is still a limited resource until now. To solve this problem, we propose a data augmentation method based on StyleGAN2 to generate artificial expression images with respect to seven emotions and use them as the additional training data. We further train an expression emotion recognition model based on a VGG16 network through transfer learning. In this research, we proposed a method using transfer learning and augmented images of facial expressions using trained VGG16 and StyleGAN2 and conducted experiments to achieve higher recognition accuracy for racial expression emotion recognition. Our experiment based on the CFEE dataset suggested that an emotion recognition accuracy of 75.10% could be obtained through transfer learning and the accuracy could further improved to 82.04% with the augmented expression images.","PeriodicalId":339547,"journal":{"name":"2022 8th International Conference on Systems and Informatics (ICSAI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Facial Expression Emotion Recognition Based on Transfer Learning and Generative Model\",\"authors\":\"Tomoki Kusunose, Xin Kang, Keita Kiuchi, Ryota Nishimura, M. Sasayama, Kazuyuki Matsumoto\",\"doi\":\"10.1109/ICSAI57119.2022.10005478\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Facial expression emotion recognition has been a popular research topic, which played an important role in assisting the natural human-machine conversation. The conventional method for emotion estimation from facial expressions is to learn a CNN-based image classification model from scratch, However, learning such model requires a large number of labeled facial expression images, which is still a limited resource until now. To solve this problem, we propose a data augmentation method based on StyleGAN2 to generate artificial expression images with respect to seven emotions and use them as the additional training data. We further train an expression emotion recognition model based on a VGG16 network through transfer learning. In this research, we proposed a method using transfer learning and augmented images of facial expressions using trained VGG16 and StyleGAN2 and conducted experiments to achieve higher recognition accuracy for racial expression emotion recognition. 
Our experiment based on the CFEE dataset suggested that an emotion recognition accuracy of 75.10% could be obtained through transfer learning and the accuracy could further improved to 82.04% with the augmented expression images.\",\"PeriodicalId\":339547,\"journal\":{\"name\":\"2022 8th International Conference on Systems and Informatics (ICSAI)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 8th International Conference on Systems and Informatics (ICSAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSAI57119.2022.10005478\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 8th International Conference on Systems and Informatics (ICSAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSAI57119.2022.10005478","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Facial Expression Emotion Recognition Based on Transfer Learning and Generative Model
Facial expression emotion recognition has been a popular research topic and plays an important role in supporting natural human-machine conversation. The conventional approach to estimating emotion from facial expressions is to train a CNN-based image classification model from scratch. However, training such a model requires a large number of labeled facial expression images, which remain a limited resource. To address this problem, we propose a data augmentation method based on StyleGAN2 that generates artificial expression images for seven emotions and uses them as additional training data. We further train an expression emotion recognition model based on a VGG16 network through transfer learning. In this research, we combine transfer learning with a pretrained VGG16 and expression images augmented by a trained StyleGAN2, and conduct experiments to achieve higher accuracy in facial expression emotion recognition. Our experiments on the CFEE dataset suggest that an emotion recognition accuracy of 75.10% can be obtained through transfer learning, and that the accuracy can be further improved to 82.04% when the augmented expression images are added.
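The abstract outlines the transfer-learning step: a VGG16 backbone pretrained on ImageNet is adapted to a seven-class expression emotion classifier and trained on the original plus StyleGAN2-augmented images. The sketch below illustrates one way this step could look in PyTorch/torchvision; the paper does not specify the framework, layer-freezing policy, or hyperparameters, so those choices (and the helper name train_one_epoch) are assumptions for illustration only.

```python
# Illustrative sketch of VGG16 transfer learning for 7-class expression emotion
# recognition. Framework, freezing policy, and hyperparameters are assumptions;
# the paper's actual training setup may differ.

import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7  # seven target emotion categories, as stated in the abstract

# Start from VGG16 with ImageNet-pretrained weights (the transfer-learning base).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; only the classifier head is trained.
# (Assumed policy -- the authors may fine-tune more layers.)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer: 1000 ImageNet classes -> 7 emotions.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_EMOTIONS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

def train_one_epoch(model, loader, device="cuda"):
    """One pass over a DataLoader yielding (image, emotion_label) batches.

    In the setting described above, `loader` would mix the original CFEE images
    with the StyleGAN2-generated expression images used as additional data.
    """
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```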