TcT: Temporal and channel Transformer for EEG-based Emotion Recognition

Yanling Liu, Yueying Zhou, Daoqiang Zhang
{"title":"基于脑电图的情感识别的时间和通道转换器","authors":"Yanling Liu, Yueying Zhou, Daoqiang Zhang","doi":"10.1109/CBMS55023.2022.00072","DOIUrl":null,"url":null,"abstract":"In recent years, Electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interface. Relevant studies in the neuroscience domain have shown that various emotional states may activate differently in brain regions and time points. Though the EEG signals have the characteristics of high temporal resolution and strong global correlation, the low signal-to-noise ratio and much redundant information bring challenges to the fast emotion recognition. To cope with the above problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition, which is directly applied to the raw preprocessed EEG data. In the model, we propose a TcT self-attention mechanism that simultaneously captures temporal and channel dependencies. The sliding window weight sharing strategy is designed to gradually refine the features from coarse time granularity, and reduce the complexity of the attention calculation. The original signal is passed between layers through the residual structure to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"TcT: Temporal and channel Transformer for EEG-based Emotion Recognition\",\"authors\":\"Yanling Liu, Yueying Zhou, Daoqiang Zhang\",\"doi\":\"10.1109/CBMS55023.2022.00072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, Electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interface. Relevant studies in the neuroscience domain have shown that various emotional states may activate differently in brain regions and time points. Though the EEG signals have the characteristics of high temporal resolution and strong global correlation, the low signal-to-noise ratio and much redundant information bring challenges to the fast emotion recognition. To cope with the above problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition, which is directly applied to the raw preprocessed EEG data. In the model, we propose a TcT self-attention mechanism that simultaneously captures temporal and channel dependencies. The sliding window weight sharing strategy is designed to gradually refine the features from coarse time granularity, and reduce the complexity of the attention calculation. The original signal is passed between layers through the residual structure to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. 
The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.\",\"PeriodicalId\":218475,\"journal\":{\"name\":\"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CBMS55023.2022.00072\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMS55023.2022.00072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In recent years, Electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interfaces. Studies in neuroscience have shown that different emotional states may produce different activation patterns across brain regions and time points. Although EEG signals offer high temporal resolution and strong global correlation, their low signal-to-noise ratio and large amount of redundant information pose challenges for fast emotion recognition. To address this problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition that is applied directly to raw preprocessed EEG data. Within the model, we propose a TcT self-attention mechanism that captures temporal and channel dependencies simultaneously. A sliding-window weight-sharing strategy gradually refines features from a coarse temporal granularity and reduces the complexity of the attention computation. The original signal is passed between layers through a residual structure to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.
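The abstract does not include an implementation. As a rough illustration of the idea of attending over both the temporal and channel axes of an EEG segment, the following is a minimal PyTorch sketch. The module name TemporalChannelAttention, the tensor shapes, and the sequential (rather than joint) application of temporal and channel attention are illustrative assumptions, not the authors' TcT block; the paper's sliding-window weight sharing is also omitted.

```python
# Minimal sketch (not the authors' implementation) of attention applied along
# both the time axis and the channel axis of an EEG segment, with residual
# connections carrying the original signal forward. Shapes and names are
# illustrative assumptions.
import torch
import torch.nn as nn


class TemporalChannelAttention(nn.Module):
    def __init__(self, n_channels: int = 32, d_time: int = 128, n_heads: int = 4):
        super().__init__()
        # Temporal attention: tokens are time samples, features are channels.
        self.temporal_attn = nn.MultiheadAttention(n_channels, n_heads, batch_first=True)
        # Channel attention: tokens are channels, features are time samples.
        self.channel_attn = nn.MultiheadAttention(d_time, n_heads, batch_first=True)
        self.norm_t = nn.LayerNorm(n_channels)
        self.norm_c = nn.LayerNorm(d_time)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, d_time), e.g. a raw preprocessed EEG window.
        xt = x.transpose(1, 2)                 # (batch, d_time, n_channels)
        t, _ = self.temporal_attn(xt, xt, xt)  # mix information across time points
        xt = self.norm_t(xt + t)               # residual keeps the original signal
        xc = xt.transpose(1, 2)                # back to (batch, n_channels, d_time)
        c, _ = self.channel_attn(xc, xc, xc)   # mix information across channels
        return self.norm_c(xc + c)


if __name__ == "__main__":
    # Example: 8 segments with 32 channels and 128 samples each (DEAP-like shapes).
    eeg = torch.randn(8, 32, 128)
    out = TemporalChannelAttention()(eeg)
    print(out.shape)  # torch.Size([8, 32, 128])
```

In this sketch the two attention passes are stacked sequentially for simplicity; the paper describes a single mechanism that captures both dependency types simultaneously, so this should be read only as a conceptual analogue.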