Kyu-Seob Song, Young-Hoon Nho, Ju-Hwan Seo, D. Kwon
{"title":"基于多模态情感识别信息的决策级情感识别融合方法","authors":"Kyu-Seob Song, Young-Hoon Nho, Ju-Hwan Seo, D. Kwon","doi":"10.1109/URAI.2018.8441795","DOIUrl":null,"url":null,"abstract":"Human emotion recognition is an important factor for social robots. In previous research, emotion recognizers with many modalities have been studied, but there are several problems that make recognition rates lower when a recognizer is applied to a robot. This paper proposes a decision level fusion method that takes the outputs of each recognizer as an input and confirms which combination of features achieves the highest accuracy. We used EdNet, which was developed in KAIST based Convolutional Neural Networks (CNNs), as a facial expression recognizer and a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a higher accuracy 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm for classification of combinations of features from EdN et and the speech analytics engine.","PeriodicalId":347727,"journal":{"name":"2018 15th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information\",\"authors\":\"Kyu-Seob Song, Young-Hoon Nho, Ju-Hwan Seo, D. Kwon\",\"doi\":\"10.1109/URAI.2018.8441795\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human emotion recognition is an important factor for social robots. In previous research, emotion recognizers with many modalities have been studied, but there are several problems that make recognition rates lower when a recognizer is applied to a robot. 
This paper proposes a decision level fusion method that takes the outputs of each recognizer as an input and confirms which combination of features achieves the highest accuracy. We used EdNet, which was developed in KAIST based Convolutional Neural Networks (CNNs), as a facial expression recognizer and a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a higher accuracy 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm for classification of combinations of features from EdN et and the speech analytics engine.\",\"PeriodicalId\":347727,\"journal\":{\"name\":\"2018 15th International Conference on Ubiquitous Robots (UR)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 15th International Conference on Ubiquitous Robots (UR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/URAI.2018.8441795\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 15th International Conference on Ubiquitous Robots (UR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/URAI.2018.8441795","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Decision-Level Fusion Method for Emotion Recognition using Multimodal Emotion Recognition Information
Human emotion recognition is an important factor for social robots. In previous research, emotion recognizers with many modalities have been studied, but several problems lower recognition rates when a recognizer is applied to a robot. This paper proposes a decision-level fusion method that takes the outputs of each recognizer as input and determines which combination of features achieves the highest accuracy. We used EdNet, a facial expression recognizer developed at KAIST based on Convolutional Neural Networks (CNNs), together with a speech analytics engine developed for speech emotion recognition. Finally, we confirmed a higher accuracy of 43.40% using an artificial neural network (ANN) or the k-Nearest Neighbor (k-NN) algorithm to classify combinations of features from EdNet and the speech analytics engine.
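The decision-level fusion described in the abstract can be sketched as follows: each modality's recognizer emits a score vector over emotion classes, the vectors are concatenated into one fused feature, and a simple classifier (here k-NN) makes the final decision. This is a minimal illustrative sketch, not the authors' implementation; the emotion label set and all function names are assumptions.

```python
import math

# Assumed label set for illustration only (the paper's classes may differ).
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(face_probs, speech_probs):
    """Decision-level fusion: concatenate each recognizer's output
    probability vector into one fused feature vector."""
    return list(face_probs) + list(speech_probs)

def knn_predict(train, query, k=3):
    """Plain k-NN over fused vectors: majority vote among the k
    training examples nearest (Euclidean) to the query."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

In use, `train` would hold fused vectors built from the face and speech recognizers' outputs on labeled clips; an ANN could replace `knn_predict` as the fusion classifier, as the paper also evaluates.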