T_SRNET: A multimodal model based on convolutional neural network for emotional speech enhancement

IF 6.2 · CAS Zone 2 (Engineering & Technology) · JCR Q1 (ENGINEERING, MULTIDISCIPLINARY)
Shaoqiang Wang, Lei Feng, Li Zhang
{"title":"T_SRNET: A multimodal model based on convolutional neural network for emotional speech enhancement","authors":"Shaoqiang Wang ,&nbsp;Lei Feng ,&nbsp;Li Zhang","doi":"10.1016/j.aej.2025.03.071","DOIUrl":null,"url":null,"abstract":"<div><div>Speech classification is a technology that can determine the emotional state conveyed by speech. It can support emotion-related applications and improve the human–computer interaction experience. However, the lack of high-quality speech annotation datasets makes it difficult for many models to provide sufficient data for training, resulting in poor model generalization performance. It is necessary to obtain more high-quality speech annotation datasets through the high-precision model. For example, there are many human emotional data in the image dataset that can be utilized to assist in speech emotional information recognition. In this study, a multimodal algorithm T_SRNET is proposed, which can assist speech emotion recognition by extracting image emotion features and converting them into spectrograms. Firstly, the face image data with emotions such as joy and sadness are transformed into the corresponding phonograms by the diffusion model. Secondly, the features can be extracted by using the speech feature extraction network SRNET based on the improved transform structure. Finally, the speech signal features are extracted, and the two features are fused before the decision is made to output the results. After ablation and contrast experiments, the accuracy of CREMA-D and IEMOCAP was improved by 2% and 1% respectively. Also it can be evaluated that the proposed model in this study can correlate image data with speech data, improve the quality of speech data tagging and enhance the performance of speech recognition.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"124 ","pages":"Pages 573-581"},"PeriodicalIF":6.2000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825003795","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Speech emotion classification determines the emotional state conveyed by speech; it supports emotion-related applications and improves the human–computer interaction experience. However, the scarcity of high-quality annotated speech datasets leaves many models without sufficient training data, resulting in poor generalization. High-precision models are therefore needed to produce more high-quality speech annotations, and other modalities can help: image datasets contain abundant human emotional data that can assist the recognition of emotional information in speech. This study proposes T_SRNET, a multimodal algorithm that assists speech emotion recognition by extracting image emotion features and converting them into spectrograms. First, face images carrying emotions such as joy and sadness are transformed into corresponding spectrograms by a diffusion model. Second, features are extracted from these spectrograms by SRNET, a speech feature extraction network based on an improved Transformer structure. Finally, features are extracted from the speech signal itself, and the two feature sets are fused before the decision layer outputs the result. In ablation and comparison experiments, accuracy on CREMA-D and IEMOCAP improved by 2% and 1%, respectively. The results indicate that the proposed model can correlate image data with speech data, improve the quality of speech data annotation, and enhance speech emotion recognition performance.
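The abstract describes a two-branch, late-fusion design: a diffusion model converts face images into spectrograms, SRNET encodes them, speech-signal features are extracted in parallel, and the two feature sets are fused before a decision layer. The paper does not publish code, so the following is only a minimal PyTorch sketch of that fusion step; the module names, dimensions, concatenation-based fusion, and six-class output are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-branch late-fusion classifier, assuming both
# branches consume (batch, 1, freq, time) spectrogram tensors.
# All names and sizes are hypothetical; T_SRNET's actual layers differ.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, feat_dim=256, num_emotions=6):
        super().__init__()
        # Branch 1: encodes spectrograms generated from face images
        # (stand-in for the diffusion-model + SRNET pipeline).
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Branch 2: encodes features of the raw speech signal.
        self.speech_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Decision layer applied after fusing the two feature sets.
        self.head = nn.Linear(2 * feat_dim, num_emotions)

    def forward(self, image_spec, speech_spec):
        f_img = self.image_branch(image_spec)
        f_sp = self.speech_branch(speech_spec)
        # Fusion by concatenation before the decision is made.
        return self.head(torch.cat([f_img, f_sp], dim=-1))

# Usage: two spectrogram batches in, per-emotion logits out.
model = FusionClassifier()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 6])
```

Concatenation is the simplest fusion choice; attention-weighted or gated fusion of the two feature vectors is a common alternative when one modality is noisier than the other.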
Source journal: Alexandria Engineering Journal (Engineering, General Engineering)
CiteScore: 11.20
Self-citation rate: 4.40%
Articles per year: 1015
Review time: 43 days
Journal introduction: Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. It is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). Papers published in the journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering