EEG evoked automated emotion recognition using deep convolutional neural network

Abgeena Abgeena, S. Garg
{"title":"EEG evoked automated emotion recognition using deep convolutional neural network","authors":"Abgeena Abgeena, S. Garg","doi":"10.1109/ICECCT56650.2023.10179711","DOIUrl":null,"url":null,"abstract":"As life continues to change in the digital era, it is crucial to perceive a person's emotional state. Affective computing is receiving more attention with the increase in the human-computer interface (HCI). Human emotion recognition employing electroen-cephalogram (EEG) signals has been studied to obtain a person's emotional status for different stimuli. However, it is difficult to identify clear patterns in EEG signals because they have low electrical impulses and are highly sensitive to noise. A deep convolutional neural network (DCNN) was employed in the present study to recognize emotions in EEG signals. For this purpose, a publicly available dataset, DREAMER, was utilized in this study to assess the applicability of the model for emotion classification. The dataset consisted of three-dimensional emotions, that is, valence, arousal, and dominance (VAD). 2D emotions arousal and valence were the most-recognized emotions in existing research. The present study identified the 3D emotions present in the above-mentioned dataset. In this study, raw EEG signals from the DREAMER dataset were pre-processed. Subsequently, three EEG rhythms, theta, alpha, and beta, were extracted using a bandpass filter. The power spectral density (PSD) was computed using fast Fourier transform (FFT) in the feature extraction. Finally, a 1D CNN model is applied to the classification of emotions. In addition, the performance of the proposed model was compared with two machine learning (ML) classifiers: random forest (RF) and extreme Gradient Boosting (XGBoost) classifiers. The highest accuracy (ACC) of 97.6% was obtained using the proposed model in the dominance dimension. The working principles were compared and discussed to determine the suitability of the model for emotion recognition applications.","PeriodicalId":180790,"journal":{"name":"2023 Fifth International Conference on Electrical, Computer and Communication Technologies (ICECCT)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Fifth International Conference on Electrical, Computer and Communication Technologies (ICECCT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECCT56650.2023.10179711","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

As life continues to change in the digital era, perceiving a person's emotional state is becoming crucial. Affective computing is receiving growing attention as human-computer interfaces (HCI) become more widespread. Human emotion recognition using electroencephalogram (EEG) signals has been studied to determine a person's emotional state under different stimuli. However, it is difficult to identify clear patterns in EEG signals because their electrical impulses are weak and highly sensitive to noise. In the present study, a deep convolutional neural network (DCNN) was employed to recognize emotions in EEG signals. For this purpose, a publicly available dataset, DREAMER, was used to assess the applicability of the model for emotion classification. The dataset contains three emotion dimensions: valence, arousal, and dominance (VAD). In existing research, the two-dimensional emotions of arousal and valence have been the most commonly recognized; the present study classified all three dimensions available in the dataset. Raw EEG signals from the DREAMER dataset were first pre-processed. Three EEG rhythms, theta, alpha, and beta, were then extracted using a bandpass filter, and the power spectral density (PSD) was computed with the fast Fourier transform (FFT) during feature extraction. Finally, a 1D CNN model was applied to classify the emotions. In addition, the performance of the proposed model was compared with two machine learning (ML) classifiers: random forest (RF) and extreme gradient boosting (XGBoost). The highest accuracy (ACC), 97.6%, was obtained with the proposed model in the dominance dimension. The working principles were compared and discussed to determine the suitability of the model for emotion recognition applications.
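
The processing chain summarized above (band-pass extraction of theta, alpha, and beta rhythms, FFT-based PSD features, and a 1D CNN classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 128 Hz sampling rate (commonly reported for DREAMER's Emotiv EPOC recordings), the band edges, the filter order, and the network layer sizes are all assumptions made for the example.

```python
# Minimal sketch of the described pipeline: band-pass rhythm extraction,
# FFT-based PSD features, and a small 1D CNN classifier.
# Sampling rate, band edges, filter order and layer sizes are illustrative
# assumptions, not values taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt
import tensorflow as tf

FS = 128  # Hz; assumed (DREAMER was recorded with an Emotiv EPOC headset)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # typical edges


def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)


def band_psd_features(x, fs=FS):
    """Mean FFT-based PSD (simple periodogram estimate) of each rhythm band."""
    feats = []
    for low, high in BANDS.values():
        xb = bandpass(x, low, high, fs)
        psd = np.abs(np.fft.rfft(xb)) ** 2 / (fs * len(xb))
        freqs = np.fft.rfftfreq(len(xb), d=1.0 / fs)
        feats.append(psd[(freqs >= low) & (freqs < high)].mean())
    return np.array(feats)  # shape (3,): theta, alpha, beta power


def build_1d_cnn(n_features, n_classes=2):
    """Small 1D CNN over the stacked per-channel band-power features."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
```

In practice, the per-channel band powers would be stacked into one feature vector per trial (for DREAMER, 14 EEG channels times 3 bands), reshaped to (samples, features, 1), and used to train the CNN with a cross-entropy loss; the same flat feature vectors could be fed to RF or XGBoost for the baseline comparison mentioned in the abstract.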