{"title":"Predictive human emotion recognition system using deep functional affective state modeling","authors":"Raja Majid Mehmood, Hyung-Jeong Yang, Sun-Hee Kim","doi":"10.1145/3373477.3373706","DOIUrl":null,"url":null,"abstract":"Emotions and humans are closely related to each other as emotion alleviate adaptive response to environmental changes and can act as a manner of communication about what is important to us. Emotions can be expressed through facial expressions, words, voice or speech articulation thus allowing us to conceive the emotional state of other individual and communicate with them in the best of our behavior. Emotion recognition is a process to classify different affective states of a human brain. It is a method through which we can analyze the emotive response to certain stimuli and develop human-computer interaction applications. Deep Learning algorithms recently gained attention for their accuracy, precision, speed and real time implementation. Emotion recognition has proved to be quite challenging because of its spectral-temporal pattern problems. In this study we propose a Deep Functional Affective State Model (DFASM) predictive model based on convolutional long short-term memory (ConvLSTM) using margin-based loss function. We evaluate the influence of eight emotional responses. The loss function used in this method observes more specific feelings during the training phase and allows the model to be more confident. The model is tested on public dataset (DEAP) and we recorded an increase up to 79% in the accuracy. Our proposed model is capable of capturing spatial-temporal data while learning, which helps in better emotional recognition. 
The proposed model was tested by using a public dataset (DEAP) and it outperformed other state-of-the-art methods.","PeriodicalId":300431,"journal":{"name":"Proceedings of the 1st International Conference on Advanced Information Science and System","volume":"384 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Conference on Advanced Information Science and System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3373477.3373706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Emotions and humans are closely related: emotions facilitate adaptive responses to environmental changes and act as a channel of communication about what is important to us. Emotions can be expressed through facial expressions, words, voice, or speech articulation, allowing us to perceive the emotional state of other individuals and respond to them appropriately. Emotion recognition is the process of classifying the different affective states of the human brain. It is a method through which we can analyze the emotional response to certain stimuli and develop human-computer interaction applications. Deep learning algorithms have recently gained attention for their accuracy, precision, speed, and real-time implementation. Emotion recognition has proved quite challenging because of its spectral-temporal pattern problems. In this study we propose a Deep Functional Affective State Model (DFASM), a predictive model based on convolutional long short-term memory (ConvLSTM) with a margin-based loss function. We evaluate the influence of eight emotional responses. The loss function used in this method attends to more specific feelings during the training phase and allows the model to be more confident. Our proposed model captures spatial-temporal patterns while learning, which helps in better emotion recognition. The model was tested on a public dataset (DEAP), where it reached an accuracy of up to 79% and outperformed other state-of-the-art methods.
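The abstract does not spell out which margin-based loss the model uses; as a minimal sketch of the general idea, a Crammer-Singer-style multi-class hinge loss penalizes any class whose score comes within a margin of the true class's score, which is what pushes the model toward confident predictions. The function name, margin value, and example scores below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def margin_loss(scores, true_idx, margin=1.0):
    """Multi-class margin (hinge) loss, illustrative only: penalize every
    class whose raw score comes within `margin` of the true class's score."""
    true_score = scores[true_idx]
    losses = np.maximum(0.0, scores - true_score + margin)
    losses[true_idx] = 0.0  # the true class itself contributes no loss
    return losses.sum()

# A confident prediction (true-class score well above all others) incurs
# zero loss; a narrow win is still penalized until the margin is met.
print(margin_loss(np.array([4.0, 1.0, 0.5]), true_idx=0))  # → 0.0
print(margin_loss(np.array([1.2, 1.0, 0.5]), true_idx=0))  # positive loss
```

Minimizing this loss during training widens the gap between the true class and its nearest competitor, which matches the abstract's claim that the loss "allows the model to be more confident."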