{"title":"Spatiotemporal Emotion Recognition Method Based on EEG Signals During Music Listening Using 1D-CNN & Stacked-LSTM","authors":"Shengli Liao, Yumei Zhang, Honghong Yang, Xuening Liao","doi":"10.1109/NaNA56854.2022.00009","DOIUrl":null,"url":null,"abstract":"Recognizing people's emotions accurately can help to improve people's feeling of happiness by adjusting their emotion immediately, which makes emotion recognition an active research topic recently. Electroencephalography (EEG) signals, which are electrical response of the human brain scalp, reflecting people's emotions and psychological activities, can be applied as an important tool for the emotion recognition. This paper focuses on the emotion recognition based on EEG signals during music listening. To this end, we first propose an emotion recognition scheme by combining the one-dimensional convolutional neural network (1D-CNN) and the stacked long short term memory (Stacked-LSTM), where the 1D-CNN is exploited to extract spatial features from EEG signals automatically and the Stacked-LSTM is applied for further temporal features extraction. We then conducted lots of experiments to validate the efficiency of our proposed scheme regarding the accuracy of emotion recognition. Finally, a comparison between our proposed scheme and other commonly methods used for emotion recognition based EEG signals (e.g., EEGNet, 1D-CNN, LSTM and SVM). The experimental results showed that our proposed scheme is feasible and outperform other commonly used methods in terms of classification accuracy.","PeriodicalId":113743,"journal":{"name":"2022 International Conference on Networking and Network Applications (NaNA)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Networking and Network Applications (NaNA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NaNA56854.2022.00009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Accurately recognizing people's emotions can help improve their sense of well-being by enabling timely emotion regulation, which has made emotion recognition an active research topic in recent years. Electroencephalography (EEG) signals, the electrical responses recorded from the human scalp, reflect people's emotions and psychological activities and can therefore serve as an important tool for emotion recognition. This paper focuses on emotion recognition based on EEG signals recorded during music listening. To this end, we first propose an emotion recognition scheme that combines a one-dimensional convolutional neural network (1D-CNN) with a stacked long short-term memory network (Stacked-LSTM), where the 1D-CNN automatically extracts spatial features from the EEG signals and the Stacked-LSTM further extracts temporal features. We then conduct extensive experiments to validate the effectiveness of the proposed scheme in terms of emotion recognition accuracy. Finally, we compare the proposed scheme with other commonly used methods for EEG-based emotion recognition (e.g., EEGNet, 1D-CNN, LSTM, and SVM). The experimental results show that the proposed scheme is feasible and outperforms these commonly used methods in terms of classification accuracy.
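To make the described pipeline concrete, below is a minimal PyTorch sketch of a 1D-CNN followed by a two-layer (stacked) LSTM of the kind the abstract outlines. All layer sizes, the channel count (32), window length (128 samples), and number of classes (2) are illustrative assumptions, since the abstract does not specify the architecture's hyperparameters; this is not the authors' exact model.

```python
# Hypothetical sketch of a 1D-CNN + Stacked-LSTM classifier for windowed EEG.
# Assumed shapes/hyperparameters (not from the paper): 32 channels, 128 samples, 2 classes.
import torch
import torch.nn as nn

class CNNStackedLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        # 1D convolutions over the time axis extract features across EEG channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Two stacked LSTM layers model temporal dependencies in the CNN feature sequence.
        self.lstm = nn.LSTM(input_size=128, hidden_size=64,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, time)
        feats = self.cnn(x)              # (batch, 128, time // 4)
        feats = feats.permute(0, 2, 1)   # (batch, time // 4, 128) for the LSTM
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])    # classify from the last time step

# Usage example: a batch of 8 EEG windows, 32 channels, 128 samples each.
model = CNNStackedLSTM()
logits = model(torch.randn(8, 32, 128))
print(logits.shape)  # torch.Size([8, 2])
```

The design choice mirrors the abstract's division of labor: the convolutional front end summarizes spatial structure across electrodes within short segments, and the stacked LSTM aggregates those summaries over time before a linear layer produces the emotion-class logits.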