Speech Emotion Recognition Based on Convolutional Neural Network and Feature Fusion
Mengna Gao, Jing Dong, D. Zhou, Xiaopeng Wei, Qiang Zhang
2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), November 2019. DOI: 10.1109/ISKE47853.2019.9170369
In view of the remarkable achievements of convolutional neural networks in the field of computer vision, we propose a speech emotion recognition algorithm based on a convolutional neural network and feature fusion, which extracts features from the original speech signal and its spectrogram for recognition. From the perspective of feature enhancement, the features extracted by the 1D-CNN and 2D-CNN models are fused by dimension splicing, and the fused features are then fed into the 2D-CNN model again for training. This way of fusing features makes better use of the emotional information of the speech signal in both the time domain and the frequency domain, and exploits the complementary advantages of one-dimensional and two-dimensional convolution. In three-class emotion recognition experiments on four databases, EMODB, CASIA, IEMOCAP and CHEAVD, recognition rates of 91.6%, 96.5%, 80.5% and 62.7% were obtained respectively, which are the best results among all the algorithms we propose.
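
The sketch below illustrates the fusion pipeline described in the abstract: a 1D-CNN branch over the raw waveform, a 2D-CNN branch over the spectrogram, concatenation ("dimension splicing") of the two feature vectors, and a second small 2D-CNN acting on the fused features. All layer sizes, kernel sizes, input lengths, the 128-dimensional branch outputs, and the class count are assumptions for illustration only; the paper's actual network configuration is not given in this abstract, so this is not the authors' implementation.

# Illustrative sketch only; layer shapes and hyperparameters are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class OneDBranch(nn.Module):
    """1D-CNN over the raw waveform."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, wav):                      # wav: (batch, 1, samples)
        return self.fc(self.net(wav).squeeze(-1))   # (batch, feat_dim)

class TwoDBranch(nn.Module):
    """2D-CNN over the spectrogram."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, spec):                     # spec: (batch, 1, freq, time)
        return self.fc(self.net(spec).flatten(1))   # (batch, feat_dim)

class FusionClassifier(nn.Module):
    """Concatenate (dimension-splice) both feature vectors, reshape the fused
    vector into a 2D map, and classify it with a second small 2D-CNN."""
    def __init__(self, feat_dim=128, num_classes=3):
        super().__init__()
        self.branch1d = OneDBranch(feat_dim)
        self.branch2d = TwoDBranch(feat_dim)
        # 2 * feat_dim is assumed to be a perfect square (256 -> 16 x 16 map).
        self.side = int((2 * feat_dim) ** 0.5)
        self.head = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, wav, spec):
        fused = torch.cat([self.branch1d(wav), self.branch2d(spec)], dim=1)
        fused = fused.view(-1, 1, self.side, self.side)
        return self.head(fused)

if __name__ == "__main__":
    model = FusionClassifier()
    wav = torch.randn(4, 1, 16000)      # 1 s of 16 kHz audio (assumed)
    spec = torch.randn(4, 1, 128, 64)   # mel-spectrogram-like input (assumed)
    print(model(wav, spec).shape)       # torch.Size([4, 3])

In this sketch each branch is reduced to a fixed-length vector before splicing so that the concatenated features can be reshaped into a square map for the second 2D-CNN; the abstract does not specify how the fused features are arranged, so that reshaping is a design choice of this illustration.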