Research on the Phonetic Emotion Recognition Model of Mandarin Chinese

Shuchun Li, Shaobin Li, Yang Wang, Xiaoye Zhang
{"title":"Research on the Phonetic Emotion Recognition Model of Mandarin Chinese","authors":"Shuchun Li, Shaobin Li, Yang Wang, Xiaoye Zhang","doi":"10.1109/ICCST50977.2020.00124","DOIUrl":null,"url":null,"abstract":"In recent years, Emotion Recognition (AVER) has become more and more important in the field of human-computer interaction. Due to certain defects in single-modal information, we complemented audio and visual information to perform multi-modal emotion recognition. At the same time, the choice of different classifiers has different accuracy in the emotion classification experiment. Therefore, in this paper, we introduce a multi-modal emotion recognition system. After obtaining multi-modal features, use different classifiers for learning and training, and obtain Multi Layer Perceptron Classifier, Logistic Regression, Support Vector Classifier and Linear Discriminant Analysis four classifiers with high accuracy for multi-modal emotion recognition. This paper explains the work of each part of the multimodal emotion recognition system, focusing on the performance comparison of classifiers in emotion recognition.","PeriodicalId":189809,"journal":{"name":"2020 International Conference on Culture-oriented Science & Technology (ICCST)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Culture-oriented Science & Technology (ICCST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCST50977.2020.00124","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, audio-visual emotion recognition (AVER) has become increasingly important in the field of human-computer interaction. Because single-modal information has certain limitations, we combine audio and visual information to perform multi-modal emotion recognition. At the same time, different classifiers achieve different accuracies in emotion classification experiments. In this paper, we therefore introduce a multi-modal emotion recognition system: after extracting multi-modal features, we train several classifiers and identify four that achieve high accuracy for multi-modal emotion recognition, namely the Multi-Layer Perceptron classifier, Logistic Regression, the Support Vector Classifier, and Linear Discriminant Analysis. This paper explains the work of each part of the multi-modal emotion recognition system, focusing on a performance comparison of the classifiers in emotion recognition.
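
The abstract describes a classifier-comparison stage: the same fused multi-modal features are fed to several classifiers, and their accuracies are compared. Below is a minimal sketch of that stage. It is not the authors' implementation: the scikit-learn classifiers, the default hyperparameters, and the synthetic feature matrix standing in for the fused audio-visual features are all assumptions made for illustration.

# Minimal sketch of the classifier comparison described in the abstract.
# Assumptions: scikit-learn implementations, default-ish hyperparameters,
# and a synthetic placeholder for the fused audio-visual feature vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

# Placeholder for fused audio-visual features (X) and emotion labels (y).
X, y = make_classification(n_samples=600, n_features=128, n_informative=40,
                           n_classes=4, n_clusters_per_class=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# The four classifier families named in the abstract.
classifiers = {
    "Multi-Layer Perceptron": MLPClassifier(hidden_layer_sizes=(64,),
                                            max_iter=500, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Classifier": SVC(kernel="rbf"),
    "Linear Discriminant Analysis": LinearDiscriminantAnalysis(),
}

# Train each classifier on the same features and compare held-out accuracy.
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")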