Vocal emotion recognition in five languages of Assam using features based on MFCCs and Eigen Values of Autocorrelation Matrix in presence of babble noise
{"title":"Vocal emotion recognition in five languages of Assam using features based on MFCCs and Eigen Values of Autocorrelation Matrix in presence of babble noise","authors":"A. B. Kandali, A. Routray, T. Basu","doi":"10.1109/NCC.2010.5430205","DOIUrl":null,"url":null,"abstract":"This work investigates whether vocal emotion expressions of (i) discrete emotion be distinguished from ‘no-emotion’ (i.e. neutral), (ii) one discrete emotion be distinguished from another, (iii) surprise, which is actually a cognitive component that could be present with any emotion, be also recognized as distinct emotion, (iv) discrete emotion be recognized cross-lingually. This study will enable us to get more information regarding nature and function of emotion. Furthermore, this work will help in developing a generalized vocal emotion recognition system, which will increase the efficiency of human-machine interaction systems. In this work, an emotional speech database consisting of short sentences of six full-blown basic emotions and neutral is created with 140 simulated utterances per speaker of five native languages of Assam. This database is validated by a Listening Test. A new feature set is proposed based on Eigen Values of Autocorrelation Matrix (EVAM) of each frame of the speech signal. The Gaussian Mixture Model (GMM) is used as classifier. The performance of the proposed feature set is compared with Mel Frequency Cepstral Coefficients (MFCCs) at sampling frequency of 8.1 kHz and with additive babble noise of 5 db and 0 db Signal-to-Noise Ratios (SNRs) under matched noise training and testing condition.","PeriodicalId":130953,"journal":{"name":"2010 National Conference On Communications (NCC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 National Conference On Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC.2010.5430205","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
This work investigates whether vocal emotion expressions of (i) a discrete emotion can be distinguished from 'no-emotion' (i.e. neutral), (ii) one discrete emotion can be distinguished from another, (iii) surprise, which is actually a cognitive component that may accompany any emotion, can also be recognized as a distinct emotion, and (iv) a discrete emotion can be recognized cross-lingually. The study provides further information on the nature and function of emotion, and it is a step toward a generalized vocal emotion recognition system, which would improve the efficiency of human-machine interaction systems. An emotional speech database of short sentences covering six full-blown basic emotions and neutral is created, with 140 simulated utterances per speaker across five native languages of Assam, and the database is validated by a listening test. A new feature set is proposed based on the Eigen Values of the Autocorrelation Matrix (EVAM) of each frame of the speech signal, and a Gaussian Mixture Model (GMM) is used as the classifier. The performance of the proposed feature set is compared with Mel Frequency Cepstral Coefficients (MFCCs) at a sampling frequency of 8.1 kHz, with additive babble noise at 5 dB and 0 dB Signal-to-Noise Ratios (SNRs), under matched noise training and testing conditions.
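The abstract names the main processing steps (per-frame EVAM features, additive babble noise at a fixed SNR, and per-emotion GMMs), but not their exact settings. The sketch below illustrates one plausible realization of that pipeline; the frame length, hop size, autocorrelation order, GMM size, and function names are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of an EVAM + GMM emotion recognizer (assumed parameters, not the
# authors' exact configuration). Uses NumPy, SciPy, and scikit-learn.
import numpy as np
from scipy.linalg import toeplitz, eigvalsh
from sklearn.mixture import GaussianMixture

def evam_features(signal, frame_len=200, hop=100, order=12):
    """Eigenvalues of the order-p autocorrelation (Toeplitz) matrix of each frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        # Biased autocorrelation estimates r[0..order]
        r = np.array([np.dot(frame[:frame_len - k], frame[k:]) / frame_len
                      for k in range(order + 1)])
        R = toeplitz(r)                            # (order+1) x (order+1) autocorrelation matrix
        feats.append(np.sort(eigvalsh(R))[::-1])   # eigenvalues, largest first
    return np.array(feats)

def add_noise_at_snr(clean, noise, snr_db):
    """Scale the noise so the mixture reaches the requested SNR (in dB), then add it."""
    noise = np.resize(noise, clean.shape)
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

def train_gmms(features_by_emotion, n_components=8):
    """Fit one diagonal-covariance GMM per emotion class (matched-condition training)."""
    return {emo: GaussianMixture(n_components=n_components,
                                 covariance_type="diag",
                                 random_state=0).fit(X)
            for emo, X in features_by_emotion.items()}

def classify(gmms, utterance_feats):
    """Pick the emotion whose GMM gives the highest average frame log-likelihood."""
    return max(gmms, key=lambda emo: gmms[emo].score(utterance_feats))
```

In the matched-condition setup described in the abstract, both the training and test utterances would be passed through `add_noise_at_snr` with the same babble noise level (e.g. 5 dB or 0 dB) before feature extraction; the same GMM back end can be trained on MFCC features instead of EVAM features for the comparison.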