Speech emotion recognition using multi resolution Hilbert transform based spectral and entropy features

Impact factor: 3.4 · CAS Zone 2 (Physics & Astronomy) · JCR Q1 (Acoustics)
Siba Prasad Mishra, Pankaj Warule, Suman Deb
Authors: Siba Prasad Mishra, Pankaj Warule, Suman Deb
Journal: Applied Acoustics, Volume 229, Article 110403
DOI: 10.1016/j.apacoust.2024.110403
Publication date: 2024-11-15 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0003682X24005541
Citations: 0

Abstract

Speech emotion recognition (SER) is essential for addressing many personal and professional challenges in everyday life. SER has shown potential in many domains, including medical intervention, security systems, online marketing and educational platforms, personal communication, and human-device interaction. Owing to this wide range of applications, the subject has attracted researchers' attention for more than three decades. SER performance can be improved by adopting a suitable methodology for extracting features and using them to classify speech emotion. In this study, we use a novel technique, the multi-resolution Hilbert transform (MRHT), to extract speech features. The multi-resolution signal decomposition (MRSD) method breaks each speech signal frame (SSF) into a number of sub-frequency-band signals, called modes or intrinsic mode functions (IMFs). The Hilbert transform (HT) is then applied to each IMF signal to obtain the MRHT-based instantaneous amplitude (MRHIA) and MRHT-based instantaneous frequency (MRHIF) signal vectors. MRHT-based approximate entropy (MRHAE), MRHT-based permutation entropy (MRHPE), MRHT-based increment entropy (MRHIE), MRHT-based spectral entropy (MRHSE), and MRHT-based sample entropy (MRHSME) features are computed from each MRHIA and MRHIF signal vector, alongside the mel frequency cepstral coefficient (MFCC) features extracted from the speech signals. The combination of the proposed MRHT-based features (MRHAE + MRHPE + MRHIE + MRHSE + MRHSME) is referred to as the MRHT-based entropy feature (MRHEF). Subsequently, the MRHEF and MFCC features are used both individually and in combination to classify speech emotion with a deep neural network (DNN) classifier.
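The MRHT front end described above (sub-band decomposition followed by a Hilbert transform on each mode) can be sketched as follows. Note that the abstract does not specify the MRSD algorithm, so a Butterworth filter bank is used here as an illustrative stand-in for the mode decomposition; the band edges and function names are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def subband_decompose(x, fs, bands):
    """Split a speech frame into sub-band signals.

    A filter-bank stand-in for the paper's MRSD step, which
    produces modes/IMFs; here each (lo, hi) band yields one mode.
    """
    modes = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        modes.append(sosfiltfilt(sos, x))
    return modes

def hilbert_ia_if(mode, fs):
    """Instantaneous amplitude and frequency of one mode.

    The analytic signal z = mode + j*H(mode) gives the
    MRHIA (|z|) and MRHIF (d/dt of the unwrapped phase) vectors.
    """
    z = hilbert(mode)
    ia = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_f = np.diff(phase) * fs / (2.0 * np.pi)  # Hz per sample step
    return ia, inst_f

# Toy 32 ms frame at 16 kHz: a 300 Hz tone plus a 2 kHz tone.
fs = 16000
t = np.arange(0, 0.032, 1.0 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

modes = subband_decompose(x, fs, [(100, 900), (1200, 3000)])
ia, inst_f = hilbert_ia_if(modes[0], fs)  # low band isolates the 300 Hz tone
```

For the low-band mode, the median instantaneous frequency should sit near 300 Hz, which is the sanity check that the amplitude/frequency vectors are meaningful before entropy features are computed from them.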
This yields emotion classification accuracies of 89.67%, 85.42%, and 83.48% on the EMO-DB, EMOVO, and SAVEE datasets, respectively. Comparing these results with other approaches, the proposed feature combination (MFCC + MRHEF) with a DNN classifier outperformed state-of-the-art SER methods.
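Two of the five entropy features named in the abstract can be sketched directly on an MRHIA/MRHIF vector. The definitions below are the standard ones for spectral entropy and permutation entropy; the paper's exact parameter choices (embedding order, normalization) are not given in the abstract, so the values here are assumptions.

```python
import numpy as np

def spectral_entropy(x):
    """MRHSE-style feature: Shannon entropy (bits) of the
    normalized power spectrum of one signal vector."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def permutation_entropy(x, order=3):
    """MRHPE-style feature: entropy (bits) of the ordinal
    patterns of length `order` occurring in the vector."""
    n = len(x) - order + 1
    patterns = [tuple(np.argsort(x[i:i + order])) for i in range(n)]
    _, counts = np.unique(np.array(patterns), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Degenerate inputs give near-zero entropy: a constant signal puts
# all spectral power at DC, and a monotone ramp has one ordinal pattern.
flat = spectral_entropy(np.ones(128))
mono = permutation_entropy(np.arange(64.0))

# A per-mode feature vector would concatenate such entropies (plus the
# approximate, increment, and sample entropies, omitted here) with MFCCs.
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
mrhef_like = np.array([spectral_entropy(sig), permutation_entropy(sig)])
```

Stacking these per-mode entropy values across all MRHIA and MRHIF vectors, then appending the MFCC vector, gives the MFCC + MRHEF input that the abstract reports feeding to the DNN classifier.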
Source journal
Applied Acoustics (Physics - Acoustics)
CiteScore: 7.40
Self-citation rate: 11.80%
Articles per year: 618
Review time: 7.5 months
Journal description: Since its launch in 1968, Applied Acoustics has been publishing high-quality research papers providing state-of-the-art coverage of research findings for engineers and scientists involved in applications of acoustics in the widest sense. Applied Acoustics looks not only at recent developments in the understanding of acoustics but also at ways of exploiting that understanding. The Journal aims to encourage the exchange of practical experience through publication and in so doing creates a fund of technological information that can be used for solving related problems. The presentation of information in graphical or tabular form is especially encouraged. If a report of a mathematical development is a necessary part of a paper, it is important to ensure that it is there only as an integral part of a practical solution to a problem and is supported by data. Applied Acoustics encourages the exchange of practical experience through Complete Papers, Short Technical Notes, and Review Articles, and thereby provides a wealth of technological information that can be used to solve related problems. Manuscripts that address all fields of applications of acoustics, ranging from medicine and NDT to the environment and buildings, are welcome.