{"title":"Emotion Recognition using Deep Stacked Autoencoder with Softmax Classifier","authors":"M. Mohana, P. Subashini","doi":"10.1109/ICAIS56108.2023.10073937","DOIUrl":null,"url":null,"abstract":"Deep learning and computer vision research are still quite active in the field of facial emotion recognition (FER). It has been widely applied in several research areas but not limited to human-robot interaction, human psychology interaction detection, and learners’ emotion identification. In recent decades, facial expression recognition using deep learning has proven to be effective. This performance has been achieved by a good degree of self-learn kernels in the convolution layer which retains spatial information of images with higher accuracy. Even though, it often leads to convergence in non-optimal local minima due to randomized initialization of weights. This paper introduces a Deep stacked autoencoder in which the output of one autoencoder has given into the input of another autoencoder along with input values. A single autoencoder does not sufficient to extract the complex relationship in features. So, these concatenated features of the stacked autoencoder help to focus on highly active features during training and testing. In addition, this approach helps to solve inefficient data issues. Finally, trained autoencoders have fine-tuned with the Adam optimizer, and emotions are classified by a softmax layer. The outcomes of the proposed methodology on the JAFFE dataset are significant, according to experiments. The proposed method achieved 82% of accuracy, 85% of Precision, 82% of Recall, and 81% of F1-score. Additionally, the performance of the stacked autoencoder has been examined using the reconstruction loss and roc curve.","PeriodicalId":164345,"journal":{"name":"2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIS56108.2023.10073937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning and computer vision research remain highly active in the field of facial emotion recognition (FER). FER has been widely applied in several research areas, including, but not limited to, human-robot interaction, human psychology interaction detection, and learner emotion identification. In recent decades, facial expression recognition using deep learning has proven effective. This performance has been achieved by well-learned kernels in the convolution layers, which retain the spatial information of images with high accuracy. However, such networks often converge to non-optimal local minima because of randomized weight initialization. This paper introduces a deep stacked autoencoder in which the output of one autoencoder is fed, together with the original input values, into the input of the next autoencoder. A single autoencoder is not sufficient to extract complex relationships among features, so the concatenated features of the stacked autoencoder help the network focus on highly active features during training and testing. In addition, this approach helps to mitigate the problem of insufficient data. Finally, the trained autoencoders are fine-tuned with the Adam optimizer, and emotions are classified by a softmax layer. Experiments show that the proposed methodology yields significant results on the JAFFE dataset: the proposed method achieved 82% accuracy, 85% precision, 82% recall, and an 81% F1-score. Additionally, the performance of the stacked autoencoder has been examined using the reconstruction loss and the ROC curve.
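To make the described pipeline concrete, below is a minimal sketch (not the authors' code) of a two-stage stacked autoencoder with a softmax classifier in Keras. The layer sizes, epoch counts, the 48x48 input resolution, and the random placeholder data are illustrative assumptions; only the overall structure, greedy pretraining of autoencoders, concatenation of the first encoder's output with the raw input, Adam fine-tuning, and a final softmax layer, follows the abstract.

```python
# Hypothetical sketch of a deep stacked autoencoder + softmax classifier.
# Layer sizes, epochs, input resolution, and placeholder data are assumed,
# not taken from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim = 48 * 48          # assumed flattened grayscale face crop
code1_dim, code2_dim = 512, 128
num_classes = 7              # JAFFE: six basic emotions + neutral

def build_autoencoder(in_dim, code_dim):
    """One dense autoencoder; returns (autoencoder, encoder)."""
    inp = layers.Input(shape=(in_dim,))
    code = layers.Dense(code_dim, activation="relu")(inp)
    recon = layers.Dense(in_dim, activation="sigmoid")(code)
    ae = models.Model(inp, recon)
    ae.compile(optimizer="adam", loss="mse")  # reconstruction loss
    return ae, models.Model(inp, code)

# Placeholder arrays standing in for preprocessed JAFFE images and labels.
x_train = np.random.rand(200, input_dim).astype("float32")
y_train = tf.keras.utils.to_categorical(
    np.random.randint(0, num_classes, 200), num_classes)

# Stage 1: greedily pretrain the first autoencoder on the raw inputs.
ae1, enc1 = build_autoencoder(input_dim, code1_dim)
ae1.fit(x_train, x_train, epochs=5, batch_size=32, verbose=0)

# Stage 2: feed the first encoder's output, concatenated with the raw
# input values, into the second autoencoder (per the abstract).
stage2_in = np.concatenate(
    [enc1.predict(x_train, verbose=0), x_train], axis=1)
ae2, enc2 = build_autoencoder(code1_dim + input_dim, code2_dim)
ae2.fit(stage2_in, stage2_in, epochs=5, batch_size=32, verbose=0)

# Stack the pretrained encoders, attach a softmax layer, and fine-tune
# the whole network end to end with Adam.
clf_in = layers.Input(shape=(input_dim,))
h = layers.Concatenate()([enc1(clf_in), clf_in])
out = layers.Dense(num_classes, activation="softmax")(enc2(h))
classifier = models.Model(clf_in, out)
classifier.compile(optimizer="adam",
                   loss="categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)
```

In this reading, the greedy autoencoder pretraining supplies the weight initialization that the abstract contrasts with random initialization, and the end-to-end Adam fine-tuning with a softmax output corresponds to the paper's final classification step; the per-stage MSE loss is also what would be plotted as the reconstruction loss the authors report examining.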