Title: Emotion recognition in audio records
Authors: I. Pavaloi, E. Musca, F. Rotaru
DOI: 10.1109/ISSCS.2013.6651236
Venue: International Symposium on Signals, Circuits and Systems (ISSCS 2013)
Published: 2013-07-11
Citation count: 8

Abstract: A novel approach is proposed for finding the combination of acoustic features that yields more robust automatic recognition of a speaker's emotion. Four discrete emotional states are classified. The emotional speech corpora used for training and evaluation are described in detail, and an emotion recognition model based on acoustic features is presented. The results achieved are reported and discussed.
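The abstract describes classifying four discrete emotional states from a combination of acoustic features. A minimal sketch of that general idea, using synthetic feature vectors in place of real acoustic measurements and a simple nearest-centroid classifier (all names, features, and the classifier choice are illustrative assumptions, not the paper's actual method):

```python
import random

# Four discrete emotional states, as in the abstract.
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: dict emotion -> list of feature vectors. Returns one centroid per emotion."""
    return {emo: centroid(vecs) for emo, vecs in samples.items()}

def classify(model, features):
    """Assign the emotion whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda emo: dist(model[emo], features))

# Synthetic stand-ins for acoustic features (e.g., pitch mean, energy, speech rate).
random.seed(0)
def make(center, n=20):
    return [[c + random.gauss(0, 0.1) for c in center] for _ in range(n)]

data = {
    "neutral": make([0.0, 0.0, 0.0]),
    "happy":   make([1.0, 1.0, 0.5]),
    "sad":     make([-1.0, -0.5, -1.0]),
    "angry":   make([1.0, -1.0, 1.0]),
}
model = train(data)
print(classify(model, [0.9, 1.1, 0.4]))  # lies nearest the "happy" centroid
```

In practice, the features would come from an acoustic front end (pitch, energy, spectral measures extracted from the speech corpora), and the feature-combination search the paper proposes would evaluate which subsets of such features give the most robust classification.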