{"title":"Research on the Phonetic Emotion Recognition Model of Mandarin Chinese","authors":"Shuchun Li, Shaobin Li, Yang Wang, Xiaoye Zhang","doi":"10.1109/ICCST50977.2020.00124","DOIUrl":null,"url":null,"abstract":"In recent years, Emotion Recognition (AVER) has become more and more important in the field of human-computer interaction. Due to certain defects in single-modal information, we complemented audio and visual information to perform multi-modal emotion recognition. At the same time, the choice of different classifiers has different accuracy in the emotion classification experiment. Therefore, in this paper, we introduce a multi-modal emotion recognition system. After obtaining multi-modal features, use different classifiers for learning and training, and obtain Multi Layer Perceptron Classifier, Logistic Regression, Support Vector Classifier and Linear Discriminant Analysis four classifiers with high accuracy for multi-modal emotion recognition. This paper explains the work of each part of the multimodal emotion recognition system, focusing on the performance comparison of classifiers in emotion recognition.","PeriodicalId":189809,"journal":{"name":"2020 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Culture-oriented Science & Technology (ICCST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCST50977.2020.00124","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, audio-visual emotion recognition (AVER) has become increasingly important in the field of human-computer interaction. Because single-modal information has inherent limitations, we combine audio and visual information to perform multi-modal emotion recognition. At the same time, the choice of classifier affects the accuracy achieved in emotion classification experiments. In this paper, we therefore introduce a multi-modal emotion recognition system: after extracting multi-modal features, we train and evaluate different classifiers, and identify four that achieve high accuracy for multi-modal emotion recognition, namely the Multi-Layer Perceptron classifier, Logistic Regression, the Support Vector Classifier, and Linear Discriminant Analysis. This paper explains the work of each part of the multi-modal emotion recognition system, focusing on a performance comparison of these classifiers in emotion recognition.
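The comparison described in the abstract, training the four named classifiers on a shared feature set and ranking them by accuracy, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic feature matrix stands in for the fused audio-visual features, and all hyperparameters are assumptions.

```python
# Hypothetical sketch of the classifier comparison from the abstract.
# Synthetic features stand in for the paper's fused audio-visual features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in multi-modal feature matrix: 500 samples, 64 dims, 4 emotion classes.
X, y = make_classification(n_samples=500, n_features=64, n_informative=32,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# The four classifiers named in the abstract (settings here are illustrative).
classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVC": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
}

# Train each classifier on the same features and report test accuracy.
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in classifiers.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```

On real fused features the same loop would let one rank the four classifiers directly, which mirrors the performance comparison the paper focuses on.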