{"title":"Development of Speech Emotion Recognition Algorithm using MFCC and Prosody","authors":"Hyejin Koo, S. Jeong, Sungjae Yoon, Wonjong Kim","doi":"10.1109/ICEIC49074.2020.9051281","DOIUrl":null,"url":null,"abstract":"Recently, in the field of Human Computer Interaction (HCI), speech emotion recognition (SER) is a highly challenging work. Various models have been proposed for better performance. In this paper, we use GRU model, which achieves comparably high performance with less parameters. We used not only MFCC, delta, and acceleration, but also delta of acceleration. Additionally, we propose the novel input feature that captures their pair simultaneously. Furthermore, we applied the prosody, the low-level feature of speech, for every step in GRU cell with MFCC feature. Our model obtained 64.47% of weighted accuracy, using only audio input from both of improvised and scripted data in IEMOCAP.","PeriodicalId":271345,"journal":{"name":"2020 International Conference on Electronics, Information, and Communication (ICEIC)","volume":"265 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Electronics, Information, and Communication (ICEIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEIC49074.2020.9051281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recently, in the field of Human-Computer Interaction (HCI), speech emotion recognition (SER) has become a highly challenging task, and various models have been proposed to improve performance. In this paper, we use a GRU-based model, which achieves comparably high performance with fewer parameters. We use not only MFCC, delta, and acceleration features, but also the delta of acceleration. Additionally, we propose a novel input feature that captures these features pairwise and simultaneously. Furthermore, we feed prosody, a low-level speech feature, into every step of the GRU cell together with the MFCC features. Our model obtains a weighted accuracy of 64.47% using only audio input from both the improvised and scripted data in IEMOCAP.
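Below is a minimal sketch of the feature pipeline and model family the abstract describes: MFCCs plus their first, second, and third temporal derivatives (delta, acceleration, and delta of acceleration) fed to a GRU classifier. It assumes librosa and PyTorch; the sample rate, MFCC count, hidden size, class count, the pairwise feature construction, and the per-step prosody injection are not specified in the abstract, so all such values and names here are illustrative, not the authors' implementation.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_features(path, n_mfcc=13):
    """MFCC + delta + acceleration + delta of acceleration (assumed params)."""
    y, sr = librosa.load(path, sr=16000)            # assumed sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc, order=1)    # first derivative
    accel = librosa.feature.delta(mfcc, order=2)    # second derivative
    d_accel = librosa.feature.delta(mfcc, order=3)  # delta of acceleration
    # Stack along the feature axis -> shape (4 * n_mfcc, n_frames)
    return np.concatenate([mfcc, delta, accel, d_accel], axis=0)

class GRUEmotionClassifier(nn.Module):
    """Illustrative GRU classifier; hidden size and class count are guesses.
    The paper's per-step prosody input to the GRU cell is not shown here."""
    def __init__(self, input_dim, hidden_dim=128, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):          # x: (batch, time, input_dim)
        _, h = self.gru(x)         # h: (num_layers, batch, hidden_dim)
        return self.fc(h[-1])      # emotion-class logits

# Usage sketch (hypothetical file path):
# feats = extract_features("utterance.wav")                  # (52, T)
# x = torch.from_numpy(feats.T).unsqueeze(0).float()         # (1, T, 52)
# logits = GRUEmotionClassifier(input_dim=feats.shape[0])(x)
```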