{"title":"Feature Selection in Acted Speech for the Creation of an Emotion Recognition Personalization Service","authors":"C. Anagnostopoulos","doi":"10.1109/SMAP.2008.34","DOIUrl":null,"url":null,"abstract":"One hundred thirty three (133) sound/speech features extracted from pitch, Mel frequency cepstral coefficients, energy and formants were evaluated in order to create a feature set sufficient to discriminate between seven emotions in acted speech. After the appropriate feature selection, multilayered perceptrons were trained for emotion recognition on the basis of a 23-input vector, which provide information about the prosody of the speaker over the entire sentence. Several experiments were performed and the results are presented analytically. Extra emphasis was given to assess the proposed 23-input vector in a speaker independent framework where speakers are not ¿known¿ to the classifier. The proposed feature vector achieved promising results (51%) for speaker independent recognition in seven emotion classes. Moreover, considering the problem of classifying high and low arousal emotions, our classifier reaches 86.8% successful recognition.","PeriodicalId":292389,"journal":{"name":"2008 Third International Workshop on Semantic Media Adaptation and Personalization","volume":"512 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 Third International Workshop on Semantic Media Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMAP.2008.34","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
One hundred thirty-three (133) sound/speech features extracted from pitch, Mel-frequency cepstral coefficients, energy, and formants were evaluated in order to create a feature set sufficient to discriminate between seven emotions in acted speech. After the appropriate feature selection, multilayer perceptrons were trained for emotion recognition on the basis of a 23-input vector, which provides information about the prosody of the speaker over the entire sentence. Several experiments were performed, and the results are presented analytically. Particular emphasis was placed on assessing the proposed 23-input vector in a speaker-independent framework, in which the speakers are not "known" to the classifier. The proposed feature vector achieved promising results (51%) for speaker-independent recognition across the seven emotion classes. Moreover, on the problem of classifying high- and low-arousal emotions, the classifier reaches 86.8% successful recognition.
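The abstract outlines a pipeline of prosodic/spectral feature extraction, feature selection down to 23 inputs, and multilayer-perceptron classification. A minimal sketch of that kind of pipeline is given below; it is not the authors' implementation, and the library choices (librosa, scikit-learn), the per-utterance statistics, and all parameter values are assumptions for illustration only.

```python
# Hypothetical sketch of a pitch/MFCC/energy feature pipeline with
# feature selection to 23 inputs and an MLP classifier, loosely mirroring
# the pipeline described in the abstract. All settings are assumptions.
import numpy as np
import librosa
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def utterance_features(path, sr=16000):
    """Return per-utterance statistics of pitch, energy and MFCC contours."""
    y, sr = librosa.load(path, sr=sr)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    rms = librosa.feature.rms(y=y)[0]                    # energy contour
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # MFCC frames
    stats = []
    for contour in [f0, rms] + list(mfcc):
        stats += [np.mean(contour), np.std(contour),
                  np.min(contour), np.max(contour)]
    return np.array(stats)

# X: one row of statistics per utterance; y: emotion labels (7 classes)
# X = np.vstack([utterance_features(p) for p in wav_paths]); y = labels

# Reduce the pooled feature set to 23 inputs, then train the MLP.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=23),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
)
# model.fit(X, y)
```

For a speaker-independent evaluation of the kind the abstract emphasizes, the train/test split would be made by speaker (e.g., scikit-learn's GroupKFold with speaker IDs as groups) rather than by utterance, so that no speaker in the test set is "known" to the classifier.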