Classification of Emotion Based on Electroencephalogram Using Convolutional Neural Networks and Recurrent Neural Networks

S. Sundari, E. C. Djamal, Arlisa Wulandari
{"title":"Classification of Emotion Based on Electroencephalogram Using Convolutional Neural Networks and Recurrent Neural Networks","authors":"S. Sundari, E. C. Djamal, Arlisa Wulandari","doi":"10.1109/ICA52848.2021.9625702","DOIUrl":null,"url":null,"abstract":"Emotional recognition is widely used in various fields, one of which monitors emotions in understanding the processes that occur in each individual. Electroencephalogram (EEG) can capture emotional information objectively. Nevertheless, it needs appropriate processing. The multichannel of EEG gives much information that yields redundancies. Thus, it can view the information from the multichannel spatially and the sequence between signals as temporal in processing. Several methods are often used, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). The combination of CNN and RNN utilizes a large amount of information from multi-channel and reserves the characteristics of RNN for processing sequence data such as EEG signals. So that it can maintain the signal sequence information of each channel, this paper proposed the CNN-RNN to identify three positive, negative, and neutral classes of emotions. The EEG signal was filtered at a frequency of 4-45 Hz, according to the signal characteristics of the three emotion classes. It used Wavelet to get Theta, Alpha, Beta, and Gamma waves. The results showed that the use of CNN as multi-channel handling could increase the accuracy, from 91.98%, compared to 79.52% with only RNN. On the other side, CNN-RNN can provide a shorter computation time. However, the choice of emotional duration is significant. 
Experiments suggest that a second represents more emotional change than five seconds.","PeriodicalId":287945,"journal":{"name":"2021 International Conference on Instrumentation, Control, and Automation (ICA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Instrumentation, Control, and Automation (ICA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICA52848.2021.9625702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Emotion recognition is widely used in many fields; one application is monitoring emotions to understand the processes occurring within each individual. The electroencephalogram (EEG) can capture emotional information objectively, but it requires appropriate processing. Multichannel EEG provides a large amount of information with redundancies, so processing can treat the information across channels as spatial and the sequence within each signal as temporal. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are often used for these two roles. Combining CNN and RNN exploits the large amount of multichannel information while retaining the RNN's suitability for sequence data such as EEG signals, so the signal-order information of each channel is preserved. This paper proposes a CNN-RNN to identify three emotion classes: positive, negative, and neutral. The EEG signal was band-pass filtered at 4-45 Hz, according to the signal characteristics of the three emotion classes, and wavelet decomposition was used to extract the Theta, Alpha, Beta, and Gamma waves. The results showed that using the CNN for multichannel handling increased accuracy to 91.98%, compared with 79.52% for the RNN alone. The CNN-RNN also required less computation time. However, the choice of segment duration is significant: experiments suggest that a one-second segment captures emotional change better than a five-second segment.
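The preprocessing described above splits the 4-45 Hz signal into Theta (4-8 Hz), Alpha (8-13 Hz), Beta (13-30 Hz), and Gamma (30-45 Hz) bands. The paper uses wavelet decomposition for this step; as an illustrative stand-in only, the sketch below produces the same four bands with a Butterworth band-pass filter bank. The 128 Hz sampling rate, the one-second synthetic signal, and the band edges used here are assumptions for the example, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 128  # hypothetical sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)  # one-second segment, as the paper favours

# Synthetic single-channel signal with 10 Hz (alpha) and 40 Hz (gamma) content
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# The four EEG bands named in the abstract (Hz)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass between lo and hi Hz."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# One filtered component per band, same length as the input signal
components = {name: bandpass(sig, lo, hi, fs) for name, (lo, hi) in bands.items()}
```

On this synthetic input, most of the signal energy lands in the alpha and gamma components, which is the behaviour the band split is meant to produce before the band components are fed to the classifier.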
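The CNN-RNN pairing in the abstract uses the convolutional stage to fuse information across channels (spatial) and the recurrent stage to preserve the sample order within the segment (temporal). As a minimal forward-pass sketch only, not the paper's architecture, the numpy code below runs a 1-D convolution across hypothetical EEG channels, feeds the feature sequence through a simple Elman RNN, and classifies the final hidden state into the three emotion classes. The channel count, filter sizes, and hidden width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 14 EEG channels, 128 samples (1 s at 128 Hz), 3 classes
n_channels, n_samples, n_classes = 14, 128, 3
x = rng.standard_normal((n_channels, n_samples))

# --- CNN stage: each filter spans all channels, capturing spatial structure
n_filters, k = 8, 3
W_conv = rng.standard_normal((n_filters, n_channels, k)) * 0.1
feat_len = n_samples - k + 1
feat = np.zeros((n_filters, feat_len))
for f in range(n_filters):
    for t in range(feat_len):
        # Valid convolution over a k-sample window, followed by ReLU
        feat[f, t] = max(np.sum(W_conv[f] * x[:, t:t + k]), 0.0)

# --- RNN stage: simple Elman recurrence over time keeps the sequence order
hidden = 16
W_xh = rng.standard_normal((hidden, n_filters)) * 0.1
W_hh = rng.standard_normal((hidden, hidden)) * 0.1
h = np.zeros(hidden)
for t in range(feat_len):
    h = np.tanh(W_xh @ feat[:, t] + W_hh @ h)

# --- Classifier: softmax over positive / negative / neutral
W_out = rng.standard_normal((n_classes, hidden)) * 0.1
logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

With random weights the class probabilities are uninformative; the sketch only shows the data flow the abstract describes: channels in, spatial features out of the CNN, a time-ordered pass through the RNN, and a three-way softmax at the end.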