Enhanced speech emotion understanding using advanced attention-centric convolutional networks

Authors: Yingmei Qi, Heming Huang, Huiyun Zhang
Journal: Biomedical Signal Processing and Control, Volume 108, Article 107936
Publication date: 2025-05-03 (Journal Article)
DOI: 10.1016/j.bspc.2025.107936
Journal metrics: Impact Factor 4.9, JCR Q1 (Engineering, Biomedical), Region 2 (Medicine)
URL: https://www.sciencedirect.com/science/article/pii/S1746809425004471
Citations: 0
Abstract
Speech Emotion Recognition (SER) plays a crucial role in Human-Computer Interaction (HCI) systems, enabling machines to understand and respond to human emotional states. This paper presents an advanced framework leveraging feature fusion and deep learning architectures for robust SER. The proposed model integrates multiple acoustic features extracted using techniques such as Mel-Frequency Cepstral Coefficients (MFCC), Zero-Crossing Rate (ZCR), and chroma. These features are augmented with statistical summaries, including the mean, maximum, and minimum values of the MFCCs, enhancing the discriminative power of the input representation. The proposed deep learning architecture, Advanced Attention-Centric Convolutional Networks (AACCN), incorporates a hybrid approach combining Multi-Head Attention (MHA) mechanisms with Convolutional Neural Networks (CNNs). MHA is employed to capture intricate dependencies within the input sequences, while CNNs facilitate hierarchical feature learning and spatial modeling of temporal sequences. Batch normalization and dropout are applied to enhance model generalization and mitigate overfitting. Experimental results on benchmark datasets demonstrate that the proposed framework achieves state-of-the-art performance in SER tasks. Results show significant improvements in accuracy, precision, recall, and F1-score compared to baseline models. The effectiveness of feature fusion and the synergy between MHA and CNNs highlight the robustness and scalability of the proposed AACCN model across diverse emotional contexts in speech signals.
Journal Introduction
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research into the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.