CCNN-SVM: Automated Model for Emotion Recognition Based on Custom Convolutional Neural Networks with SVM
Metwally Rashad, Doaa M. Alebiary, Mohammed Aldawsari, Ahmed A. El-Sawy, Ahmed H. AbuEl-Atta
Information, 2024. DOI: 10.3390/info15070384

Abstract
The expressions on human faces reveal the emotions we are experiencing internally. Emotion recognition based on facial expressions is a subfield of social signal processing with applications in several areas, particularly human-computer interaction. This study presents a simple automated CCNN-SVM model as a viable approach to facial expression recognition (FER). The model combines image preprocessing techniques, a custom Convolutional Neural Network (CCNN) for feature extraction, and a Support Vector Machine (SVM) for classification. First, the input image is preprocessed using face detection, histogram equalization, gamma correction, and resizing. Second, the preprocessed image passes through a single custom deep convolutional network (the CCNN) to extract deep features. Finally, an SVM classifies the extracted features. The proposed model was trained and tested on four datasets: CK+, JAFFE, KDEF, and FER. These datasets consist of seven primary emotion categories (anger, disgust, fear, happiness, sadness, surprise, and neutrality) for CK+, with contempt included for JAFFE. Compared with existing facial expression recognition techniques, the model performs well, achieving an accuracy of 99.3% on the CK+ dataset, 98.4% on JAFFE, 87.18% on KDEF, and 88.7% on FER.
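As a rough illustration of the pipeline described above, the following Python sketch chains the three stages: preprocessing with OpenCV (face detection, histogram equalization, gamma correction, resizing), a small custom CNN used as a deep feature extractor, and a scikit-learn SVM trained on the extracted features. The layer configuration, the 48x48 input size, the gamma value, and all hyperparameters are illustrative assumptions; the paper's exact CCNN architecture and training settings are not reproduced here.

# Hypothetical end-to-end sketch (not the authors' exact code or settings).
import cv2
import numpy as np
from sklearn.svm import SVC
from tensorflow import keras
from tensorflow.keras import layers

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(img_gray, size=48, gamma=1.5):
    """Face detection -> histogram equalization -> gamma correction -> resize.
    img_gray is an 8-bit grayscale image; returns a (size, size, 1) float array."""
    faces = FACE_CASCADE.detectMultiScale(img_gray, 1.1, 5)
    if len(faces) > 0:                      # crop to the first detected face
        x, y, w, h = faces[0]
        img_gray = img_gray[y:y + h, x:x + w]
    img_gray = cv2.equalizeHist(img_gray)   # histogram equalization
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    img_gray = cv2.LUT(img_gray, lut)       # gamma correction via lookup table
    resized = cv2.resize(img_gray, (size, size)).astype("float32") / 255.0
    return resized[..., np.newaxis]

def build_ccnn(num_classes=7, size=48):
    """A small custom CNN; the real CCNN architecture may differ."""
    return keras.Sequential([
        layers.Input(shape=(size, size, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu", name="features"),
        layers.Dense(num_classes, activation="softmax"),
    ])

def train_pipeline(x_train, y_train, num_classes=7):
    """x_train: (N, 48, 48, 1) preprocessed faces; y_train: integer labels."""
    cnn = build_ccnn(num_classes)
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    cnn.fit(x_train, y_train, epochs=30, batch_size=64, verbose=0)
    # Drop the softmax head and fit an SVM on the penultimate-layer features.
    extractor = keras.Model(cnn.input, cnn.get_layer("features").output)
    svm = SVC(kernel="rbf", C=10.0)
    svm.fit(extractor.predict(x_train, verbose=0), y_train)
    return extractor, svm

In this arrangement, the CNN is first trained end-to-end with a softmax head; the head is then discarded and the SVM is fit on the penultimate-layer activations, which matches the role the abstract assigns to the SVM classifier.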