MT-EfficientNetV2: A Multi-Temporal Scale Fusion EEG Emotion Recognition Method Based on Recurrence Plots

Impact Factor 3.6 · CAS Tier 3 (Computer Science) · JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Zihan Zhang;Zhiyong Zhou;Jun Wang;Hao Hu;Jing Zhao
{"title":"mt - effentnetv2:一种基于递归图的多时间尺度脑电情感识别方法","authors":"Zihan Zhang;Zhiyong Zhou;Jun Wang;Hao Hu;Jing Zhao","doi":"10.1109/ACCESS.2025.3592336","DOIUrl":null,"url":null,"abstract":"Emotion recognition based on electroencephalography (EEG) signals has garnered significant research attention in recent years due to its potential applications in affective computing and brain-computer interfaces. Despite the proposal of various deep learning-based methods for extracting emotional features from EEG signals, most existing models struggle to effectively capture both long-term and short-term dependencies within the signals, failing to fully integrate features across different temporal scales. To address these challenges, we propose a deep learning model that combines multi-temporal-scale fusion, termed MT-EfficientNetV2. This model segments one-dimensional EEG signals using combinations of varying window sizes and fixed step lengths. The Recursive Plot (RP) algorithm is then employed to transform these segments into RGB images that intuitively represent the dynamic characteristics of the signals, facilitating the capture of complex emotional features. Additionally, a three-branch input feature fusion module has been designed to effectively integrate features across different scales within the same temporal domain. The model architecture incorporates DEconv and the SimAM attention mechanism with EfficientNetV2. This integration enhances the global fusion and expression of multi-scale features while strengthening the extraction of key emotional features at the local level, thereby suppressing redundant information. Experiments conducted on the public datasets SEED and SEED-IV yielded accuracies of 98.67% and 96.89%, respectively, surpassing current mainstream methods and validating the feasibility and effectiveness of the proposed approach.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"132079-132096"},"PeriodicalIF":3.6000,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11095664","citationCount":"0","resultStr":"{\"title\":\"MT-EfficientNetV2: A Multi-Temporal Scale Fusion EEG Emotion Recognition Method Based on Recurrence Plots\",\"authors\":\"Zihan Zhang;Zhiyong Zhou;Jun Wang;Hao Hu;Jing Zhao\",\"doi\":\"10.1109/ACCESS.2025.3592336\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotion recognition based on electroencephalography (EEG) signals has garnered significant research attention in recent years due to its potential applications in affective computing and brain-computer interfaces. Despite the proposal of various deep learning-based methods for extracting emotional features from EEG signals, most existing models struggle to effectively capture both long-term and short-term dependencies within the signals, failing to fully integrate features across different temporal scales. To address these challenges, we propose a deep learning model that combines multi-temporal-scale fusion, termed MT-EfficientNetV2. This model segments one-dimensional EEG signals using combinations of varying window sizes and fixed step lengths. The Recursive Plot (RP) algorithm is then employed to transform these segments into RGB images that intuitively represent the dynamic characteristics of the signals, facilitating the capture of complex emotional features. 
Additionally, a three-branch input feature fusion module has been designed to effectively integrate features across different scales within the same temporal domain. The model architecture incorporates DEconv and the SimAM attention mechanism with EfficientNetV2. This integration enhances the global fusion and expression of multi-scale features while strengthening the extraction of key emotional features at the local level, thereby suppressing redundant information. Experiments conducted on the public datasets SEED and SEED-IV yielded accuracies of 98.67% and 96.89%, respectively, surpassing current mainstream methods and validating the feasibility and effectiveness of the proposed approach.\",\"PeriodicalId\":13079,\"journal\":{\"name\":\"IEEE Access\",\"volume\":\"13 \",\"pages\":\"132079-132096\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11095664\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Access\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11095664/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11095664/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Emotion recognition based on electroencephalography (EEG) signals has garnered significant research attention in recent years due to its potential applications in affective computing and brain-computer interfaces. Although various deep learning-based methods have been proposed for extracting emotional features from EEG signals, most existing models struggle to effectively capture both long-term and short-term dependencies within the signals, failing to fully integrate features across different temporal scales. To address these challenges, we propose MT-EfficientNetV2, a deep learning model built on multi-temporal-scale fusion. The model segments one-dimensional EEG signals using combinations of varying window sizes and a fixed step length. The recurrence plot (RP) algorithm is then employed to transform these segments into RGB images that intuitively represent the dynamic characteristics of the signals, facilitating the capture of complex emotional features. Additionally, a three-branch input feature fusion module is designed to effectively integrate features across different scales within the same temporal domain. The architecture combines DEconv and the SimAM attention mechanism with EfficientNetV2, enhancing the global fusion and expression of multi-scale features while strengthening the extraction of key emotional features at the local level, thereby suppressing redundant information. Experiments conducted on the public datasets SEED and SEED-IV yielded accuracies of 98.67% and 96.89%, respectively, surpassing current mainstream methods and validating the feasibility and effectiveness of the proposed approach.
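To make the preprocessing pipeline described above concrete, the sketch below illustrates the two signal-to-image steps in plain NumPy: slicing a one-dimensional EEG channel with several window sizes and a fixed step, and turning each segment into a recurrence-plot style RGB image. This is not the authors' released code; the window sizes, step length, embedding dimension, delay, and distance-to-colour mapping are illustrative assumptions chosen only to show the shape of the computation.

```python
# Minimal sketch (NumPy only) of the preprocessing described in the abstract,
# under assumed parameters -- not the authors' implementation.
import numpy as np

def segment_signal(x, window_sizes=(200, 400, 800), step=100):
    """Slice a 1-D EEG channel with several window sizes and one fixed step."""
    segments = {w: [] for w in window_sizes}
    for w in window_sizes:
        for start in range(0, len(x) - w + 1, step):
            segments[w].append(x[start:start + w])
    return segments

def recurrence_matrix(segment, dim=3, delay=4):
    """Pairwise-distance (unthresholded recurrence) matrix of a delay embedding."""
    n = len(segment) - (dim - 1) * delay
    emb = np.stack([segment[i:i + n] for i in range(0, dim * delay, delay)], axis=1)
    return np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)

def to_rgb(dists):
    """Map distances to an 8-bit RGB image with a simple illustrative colour ramp."""
    norm = (dists - dists.min()) / (dists.max() - dists.min() + 1e-12)
    r = (255 * norm).astype(np.uint8)                    # far pairs
    g = (255 * (1.0 - norm)).astype(np.uint8)            # close (recurrent) pairs
    b = (255 * 2.0 * np.abs(norm - 0.5)).astype(np.uint8)
    return np.stack([r, g, b], axis=-1)

if __name__ == "__main__":
    eeg = np.random.randn(2000)                          # stand-in for one EEG channel
    for w, segs in segment_signal(eeg).items():
        img = to_rgb(recurrence_matrix(segs[0]))
        print(f"window {w}: {len(segs)} segments, RP image shape {img.shape}")
```

In the paper, the images produced at the different window sizes would then feed the three-branch input fusion module and the DEconv/SimAM-augmented EfficientNetV2 backbone; that network stage is outside the scope of this sketch.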
Source journal
IEEE Access
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore: 9.80
Self-citation rate: 7.70%
Annual publications: 6673
Review time: 6 weeks
Journal description: IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest. IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on: multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals; practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems; development of new or improved fabrication or manufacturing techniques; and reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.