{"title":"基于TESS和IEMOCAP数据集的MFCC和机器学习的语音情感识别","authors":"Muhammad Zafar Iqbal","doi":"10.33897/FUJEAS.V1I2.321","DOIUrl":null,"url":null,"abstract":"Our proposed methodology involving MFCC computation along with support Vector machine is used to perform the task of Speech Emotion Recognition (SER) of collectively five emotions named Angry, Happy, Neutral, Pleasant Surprise and Sadness. Two databases are used for this purpose: Toronto Emotion Speech Set (TESS) and Interactive Emotional Dyadic Motion Capture (IEMOCAP). We achieved 97% accuracy with TESS and 86% accuracy with IEMOCAP respectively.","PeriodicalId":36255,"journal":{"name":"Iranian Journal of Botany","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"MFCC and Machine Learning Based Speech Emotion Recognition Over TESS and IEMOCAP Datasets\",\"authors\":\"Muhammad Zafar Iqbal\",\"doi\":\"10.33897/FUJEAS.V1I2.321\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Our proposed methodology involving MFCC computation along with support Vector machine is used to perform the task of Speech Emotion Recognition (SER) of collectively five emotions named Angry, Happy, Neutral, Pleasant Surprise and Sadness. Two databases are used for this purpose: Toronto Emotion Speech Set (TESS) and Interactive Emotional Dyadic Motion Capture (IEMOCAP). We achieved 97% accuracy with TESS and 86% accuracy with IEMOCAP respectively.\",\"PeriodicalId\":36255,\"journal\":{\"name\":\"Iranian Journal of Botany\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Iranian Journal of Botany\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.33897/FUJEAS.V1I2.321\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Environmental Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Iranian Journal of Botany","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33897/FUJEAS.V1I2.321","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Environmental Science","Score":null,"Total":0}
MFCC and Machine Learning Based Speech Emotion Recognition Over TESS and IEMOCAP Datasets
Our proposed methodology, combining MFCC feature computation with a Support Vector Machine (SVM) classifier, performs Speech Emotion Recognition (SER) over five emotions: Angry, Happy, Neutral, Pleasant Surprise, and Sadness. Two databases are used for this purpose: the Toronto Emotional Speech Set (TESS) and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database. We achieved 97% accuracy on TESS and 86% accuracy on IEMOCAP.
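The pipeline described above, MFCC features fed to an SVM classifier, can be sketched as follows. This is a minimal illustration assuming librosa for MFCC extraction and scikit-learn's SVC; the TESS file layout, number of MFCC coefficients, and kernel settings are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal MFCC + SVM sketch for speech emotion recognition (illustrative, not the authors' exact setup).
import glob
import os

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score


def extract_mfcc(path, n_mfcc=13):
    """Load a wav file and return its mean MFCC vector over time (fixed-length feature)."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


# Hypothetical TESS layout: emotion label is the last underscore-separated token
# of each filename, e.g. "OAF_back_angry.wav".
features, labels = [], []
for path in glob.glob("TESS/**/*.wav", recursive=True):
    emotion = os.path.splitext(os.path.basename(path))[0].split("_")[-1]
    features.append(extract_mfcc(path))
    labels.append(emotion)

X = np.array(features)
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Standardize features, then fit an SVM; the RBF kernel and C value are assumptions.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print("Accuracy:", accuracy_score(y_test, pred))
```

Averaging the MFCCs over frames is one simple way to obtain a fixed-length utterance-level feature for the SVM; other summarizations (e.g., appending standard deviations) are common variants.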