{"title":"Persian speech emotion recognition","authors":"Mohammad Savargiv, A. Bastanfard","doi":"10.1109/IKT.2015.7288756","DOIUrl":null,"url":null,"abstract":"Speech emotion recognition is one of the most challenging and the most interesting topics of the voice processing research in recent years. Performance enhancement and computational complexity mitigation are the subject matter of the current study. Current study proposes a speech emotion recognition method by employing HMM-based classifier and minimum number of features in the Persian language. Result illustrate the proposed method is able to recognizing eight emotional states of anger, happy, sadness, neutral, surprise, disgust, fear and boredom up to 79.50% average accuracy. In contrast to previous researches, the proposed method provides 8.72% improvement.","PeriodicalId":338953,"journal":{"name":"2015 7th Conference on Information and Knowledge Technology (IKT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 7th Conference on Information and Knowledge Technology (IKT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IKT.2015.7288756","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 17
Abstract
Speech emotion recognition has been one of the most challenging and interesting topics in voice processing research in recent years. Performance enhancement and computational complexity mitigation are the subject matter of the current study. This study proposes a speech emotion recognition method for the Persian language that employs an HMM-based classifier and a minimal number of features. Results illustrate that the proposed method is able to recognize eight emotional states (anger, happiness, sadness, neutral, surprise, disgust, fear, and boredom) with up to 79.50% average accuracy. Compared with previous research, the proposed method provides an 8.72% improvement.
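The abstract does not describe the implementation in detail; as a rough illustration of the general approach it names (an HMM-based classifier over a small acoustic feature set), the sketch below trains one Gaussian HMM per emotion and classifies a new utterance by maximum log-likelihood. The use of hmmlearn and librosa, the choice of 13 MFCCs, and the model sizes are assumptions for illustration, not the authors' actual configuration.

```python
# Hypothetical sketch of an HMM-based speech emotion classifier:
# one Gaussian HMM per emotion trained on frame-level MFCCs, with a
# test utterance assigned to the emotion whose HMM yields the highest
# log-likelihood. Feature set and model sizes are assumptions, not
# the paper's reported setup.
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

EMOTIONS = ["anger", "happiness", "sadness", "neutral",
            "surprise", "disgust", "fear", "boredom"]

def extract_features(wav_path, n_mfcc=13):
    """Frame-level MFCCs for one utterance, shape (frames, n_mfcc)."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

def train_emotion_hmms(train_files, n_states=5):
    """train_files: dict mapping emotion -> list of wav paths."""
    models = {}
    for emotion, paths in train_files.items():
        seqs = [extract_features(p) for p in paths]
        X = np.concatenate(seqs)            # all frames stacked
        lengths = [len(s) for s in seqs]    # per-utterance frame counts
        hmm = GaussianHMM(n_components=n_states,
                          covariance_type="diag", n_iter=50)
        hmm.fit(X, lengths)
        models[emotion] = hmm
    return models

def classify(wav_path, models):
    """Return the emotion whose HMM best explains the utterance."""
    feats = extract_features(wav_path)
    return max(models, key=lambda e: models[e].score(feats))
```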