Human emotion recognition based on time–frequency analysis of multivariate EEG signal

Impact Factor 7.6 · CAS Region 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Padhmashree V., Abhijit Bhattacharyya
Journal: Knowledge-Based Systems, Volume 238, Article 107867
DOI: 10.1016/j.knosys.2021.107867
Publication date: 2022-02-28
URL: https://www.sciencedirect.com/science/article/pii/S0950705121010455
Citations: 42

Abstract

Understanding the expression of human emotional states plays a prominent role in interactive multimodal interfaces, affective computing, and the healthcare sector. Emotion recognition through electroencephalogram (EEG) signals is a simple, inexpensive, compact, and precise solution. This paper proposes a novel four-stage method for human emotion recognition using multivariate EEG signals. In the first stage, multivariate variational mode decomposition (MVMD) is employed to extract an ensemble of multivariate modulated oscillations (MMOs) from multichannel EEG signals. In the second stage, multivariate time–frequency (TF) images are generated using joint instantaneous amplitude (JIA) and joint instantaneous frequency (JIF) functions computed from the extracted MMOs. In the third stage, a deep residual convolutional neural network (ResNet-18) is customized to extract hidden features from the TF images. Finally, classification is performed by the softmax layer. To further evaluate the performance of the model, various machine learning (ML) classifiers are employed. The feasibility and validity of the proposed method are verified on two different public emotion EEG datasets. The experimental results demonstrate that the proposed method outperforms state-of-the-art emotion recognition methods, achieving best accuracies of 99.03%, 97.59%, and 97.75% for classifying arousal, dominance, and valence, respectively. Our study reveals that TF-based multivariate EEG signal analysis using a deep residual network achieves superior performance in human emotion recognition.
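The second stage of the pipeline above can be illustrated with a minimal sketch. This is not the authors' code: it assumes a multivariate mode has already been extracted (e.g., by MVMD, which is not shown here), and it uses the standard joint-instantaneous-frequency definition from the multivariate signal-analysis literature, i.e., the power-weighted average of the per-channel instantaneous frequencies. The helper names (`joint_ia_if`, `tf_image`) and the binning scheme for rendering the TF image are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from scipy.signal import hilbert

def joint_ia_if(mode, fs):
    """JIA/JIF of one multivariate mode; `mode` has shape (channels, samples)."""
    analytic = hilbert(mode, axis=1)                         # per-channel analytic signal
    ia = np.abs(analytic)                                    # instantaneous amplitude a_c(t)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_f = np.gradient(phase, axis=1) * fs / (2 * np.pi)   # per-channel IF in Hz
    power = ia ** 2
    jia = np.sqrt(power.sum(axis=0))                         # joint instantaneous amplitude
    jif = (power * inst_f).sum(axis=0) / power.sum(axis=0)   # power-weighted joint IF
    return jia, jif

def tf_image(jia, jif, f_max, n_freq_bins=64):
    """Paint one mode's (JIA, JIF) trajectory into a time-frequency image."""
    n = jia.size
    img = np.zeros((n_freq_bins, n))
    rows = np.clip(np.round(jif / f_max * n_freq_bins).astype(int), 0, n_freq_bins - 1)
    img[rows, np.arange(n)] = jia                            # amplitude at each (t, f) cell
    return img

fs = 128.0
t = np.arange(0, 2, 1 / fs)
# toy "mode": two EEG channels carrying the same 10 Hz oscillation
mode = np.vstack([1.0 * np.sin(2 * np.pi * 10 * t),
                  0.5 * np.sin(2 * np.pi * 10 * t + 0.3)])
jia, jif = joint_ia_if(mode, fs)
img = tf_image(jia, jif, f_max=64.0)
```

In the full method, one such image per mode (stacked across modes) would form the multichannel input handed to the ResNet-18 feature extractor in stage three.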

Source journal

Knowledge-Based Systems (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems based on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.