Speaker Recognition in Emotional Environment using Excitation Features

T. Thomas, S. V, N. Sobhana, S. Koolagudi
DOI: 10.1109/ICAECC50550.2020.9339501
Published in: 2020 Third International Conference on Advances in Electronics, Computers and Communications (ICAECC), 2020-12-11
Citations: 3

Abstract

Speaker recognition is the task of identifying a person from his or her speech. It has many applications, including transaction authentication, access control, voice dialing, and web services. Emotive speaker recognition is important because in real life humans express emotions extensively during conversation, and emotion alters the voice. This work proposes a text-independent speaker recognition system designed for an emotional environment: the system is trained on speech samples recorded in a neutral environment and evaluated in an emotional one. Excitation source features are used to represent the speaker-specific details contained in the speech signal. The excitation source signal is obtained by removing the segmental-level (vocal-tract) features from the voice samples; because this signal is nearly noise-like, identifying a speaker from it in an emotive environment is a challenging task. Excitation features include the Linear Prediction (LP) residual, Glottal Closure Instants (GCI), the LP residual phase, the residual cepstrum, and Residual Mel-Frequency Cepstral Coefficients (R-MFCC). A decrease in performance is observed when the system is trained with neutral speech samples and tested with emotional ones. The emotions considered for emotional speaker identification are happy, sad, anger, fear, neutral, surprise, disgust, and sarcastic. For speaker classification, the algorithms used are the Gaussian Mixture Model (GMM), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest, and Naive Bayes.
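The paper itself provides no code, but the central feature it describes, the LP residual, is the error left after inverse-filtering a speech frame with its own linear-prediction coefficients, which removes the vocal-tract (segmental-level) contribution and leaves the excitation source. The sketch below illustrates this with the standard autocorrelation method; the function names and the LP order of 12 are illustrative choices, not taken from the paper.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Estimate LP coefficients by the autocorrelation (normal-equations) method."""
    # Autocorrelation at lags 0..order
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Toeplitz system R a = r[1..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def lp_residual(frame, order=12):
    """Inverse-filter a frame with its own LP coefficients.

    e[n] = s[n] - sum_k a[k] * s[n-1-k]  (the excitation source estimate)
    """
    a = lpc_coefficients(frame, order)
    pred = np.zeros_like(frame)
    for k in range(order):
        pred[k + 1:] += a[k] * frame[:len(frame) - k - 1]
    return frame - pred
```

For voiced speech the residual is a noise-like signal with sharp peaks near glottal closure instants, which is why the abstract notes that classifying speakers from it is harder than from spectral features.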