Audio-Emotion Recognition System Using Parallel Classifiers and Audio Feature Analyzer
Li Wern Chew, K. Seng, L. Ang, Vish Ramakonar, Amalan Gnanasegaran
2011 Third International Conference on Computational Intelligence, Modelling & Simulation. Published 2011-09-20. DOI: 10.1109/CIMSIM.2011.44
Citations: 17
Abstract
Emotion recognition from audio signals is an active research area in human-computer interaction and affective computing. This paper presents an audio-emotion recognition (AER) system using parallel classifiers and an audio feature analyzer. In the proposed system, audio features such as the pitch and the fractional cepstral coefficients are first extracted from the audio signal for analysis. These extracted features are then used to train radial basis function (RBF) classifiers. Lastly, an audio feature analyzer is used to improve the recognition rate. Simulation results show that the proposed AER system achieves an emotion recognition rate of 81.67%.
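To make the described pipeline concrete, the sketch below shows one way to extract pitch and cepstral features from a clip and train an RBF-based classifier on them. It is a minimal illustration, not the authors' implementation: librosa and scikit-learn are assumed tooling, MFCCs stand in for the paper's cepstral features, an RBF-kernel SVM stands in for the paper's parallel RBF classifiers, and the audio feature analyzer stage is not modeled.

```python
# Illustrative sketch only. librosa, scikit-learn, MFCCs and the RBF-kernel SVM
# are stand-ins for the paper's features and classifiers, not its actual method.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_features(wav_path, sr=16000):
    """Return a fixed-length vector of pitch and cepstral statistics for one clip."""
    y, sr = librosa.load(wav_path, sr=sr)
    # Frame-level pitch estimate (YIN), summarized by mean and standard deviation.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz('C2'),
                     fmax=librosa.note_to_hz('C7'), sr=sr)
    # Frame-level cepstral coefficients (MFCCs here), summarized per coefficient.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([[np.nanmean(f0), np.nanstd(f0)],
                           mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_emotion_classifier(wav_paths, labels):
    """Fit an RBF-kernel classifier on per-clip feature vectors."""
    X = np.vstack([extract_features(p) for p in wav_paths])
    model = make_pipeline(StandardScaler(),
                          SVC(kernel='rbf', C=10.0, gamma='scale'))
    model.fit(X, labels)
    return model
```

A hypothetical usage would pass a list of labeled clips, e.g. `model = train_emotion_classifier(paths, emotions)`, and then call `model.predict` on features extracted from new recordings.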