SEED-VII: A Multimodal Dataset of Six Basic Emotions With Continuous Labels for Emotion Recognition

IF 9.6 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Wei-Bang Jiang;Xuan-Hao Liu;Wei-Long Zheng;Bao-Liang Lu
{"title":"SEED-VII:带有连续标签的六种基本情绪多模态数据集,用于情绪识别","authors":"Wei-Bang Jiang;Xuan-Hao Liu;Wei-Long Zheng;Bao-Liang Lu","doi":"10.1109/TAFFC.2024.3485057","DOIUrl":null,"url":null,"abstract":"Recognizing emotions from physiological signals is a topic that has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive and high-quality emotional datasets that enable the accurate decoding of human emotions. To systematically explore human emotions, we develop a multimodal dataset consisting of six basic (happiness, sadness, fear, disgust, surprise, and anger) emotions and the neutral emotion, named SEED-VII. This multimodal dataset includes electroencephalography (EEG) and eye movement signals. The seven emotions in SEED-VII are elicited by 80 different videos and fully investigated with continuous labels that indicate the intensity levels of the corresponding emotions. Additionally, we propose a novel Multimodal Adaptive Emotion Transformer (MAET), that can flexibly process both unimodal and multimodal inputs. Adversarial training is utilized in the MAET to mitigate subject discrepancies, which enhances domain generalization. Our extensive experiments, encompassing both subject-dependent and cross-subject conditions, demonstrate the superior performance of the MAET in terms of handling various inputs. Continuous labels are used to filter the data with high emotional intensity, and this strategy is proven to be effective for attaining improved emotion recognition performance. Furthermore, complementary properties between the EEG signals and eye movements and stable neural patterns of the seven emotions are observed.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 2","pages":"969-985"},"PeriodicalIF":9.6000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SEED-VII: A Multimodal Dataset of Six Basic Emotions With Continuous Labels for Emotion Recognition\",\"authors\":\"Wei-Bang Jiang;Xuan-Hao Liu;Wei-Long Zheng;Bao-Liang Lu\",\"doi\":\"10.1109/TAFFC.2024.3485057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recognizing emotions from physiological signals is a topic that has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive and high-quality emotional datasets that enable the accurate decoding of human emotions. To systematically explore human emotions, we develop a multimodal dataset consisting of six basic (happiness, sadness, fear, disgust, surprise, and anger) emotions and the neutral emotion, named SEED-VII. This multimodal dataset includes electroencephalography (EEG) and eye movement signals. The seven emotions in SEED-VII are elicited by 80 different videos and fully investigated with continuous labels that indicate the intensity levels of the corresponding emotions. Additionally, we propose a novel Multimodal Adaptive Emotion Transformer (MAET), that can flexibly process both unimodal and multimodal inputs. Adversarial training is utilized in the MAET to mitigate subject discrepancies, which enhances domain generalization. 
Our extensive experiments, encompassing both subject-dependent and cross-subject conditions, demonstrate the superior performance of the MAET in terms of handling various inputs. Continuous labels are used to filter the data with high emotional intensity, and this strategy is proven to be effective for attaining improved emotion recognition performance. Furthermore, complementary properties between the EEG signals and eye movements and stable neural patterns of the seven emotions are observed.\",\"PeriodicalId\":13131,\"journal\":{\"name\":\"IEEE Transactions on Affective Computing\",\"volume\":\"16 2\",\"pages\":\"969-985\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2024-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Affective Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10731546/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10731546/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recognizing emotions from physiological signals is a topic that has garnered widespread interest, and research continues to develop novel techniques for perceiving emotions. However, the emergence of deep learning has highlighted the need for comprehensive and high-quality emotional datasets that enable the accurate decoding of human emotions. To systematically explore human emotions, we develop a multimodal dataset consisting of six basic emotions (happiness, sadness, fear, disgust, surprise, and anger) and the neutral emotion, named SEED-VII. This multimodal dataset includes electroencephalography (EEG) and eye movement signals. The seven emotions in SEED-VII are elicited by 80 different videos and fully investigated with continuous labels that indicate the intensity levels of the corresponding emotions. Additionally, we propose a novel Multimodal Adaptive Emotion Transformer (MAET) that can flexibly process both unimodal and multimodal inputs. Adversarial training is utilized in the MAET to mitigate subject discrepancies, which enhances domain generalization. Our extensive experiments, encompassing both subject-dependent and cross-subject conditions, demonstrate the superior performance of the MAET in handling various inputs. Continuous labels are used to filter the data with high emotional intensity, and this strategy proves effective for improving emotion recognition performance. Furthermore, complementary properties between the EEG signals and eye movements, as well as stable neural patterns of the seven emotions, are observed.
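The abstract notes that the MAET can process unimodal or multimodal inputs interchangeably. The paper's own code is not reproduced on this page, so the sketch below only illustrates the general pattern of an encoder that tokenizes whichever modalities are supplied; every module name and dimension (eeg_proj, eye_proj, feature sizes) is an assumption for illustration, not the actual MAET architecture.

```python
import torch
import torch.nn as nn

class FlexibleMultimodalEncoder(nn.Module):
    """Illustrative encoder that projects whichever modalities are
    present into tokens for a shared Transformer. Layer sizes and
    naming are assumptions, not the MAET's published design."""
    def __init__(self, eeg_dim=310, eye_dim=33, d_model=128):
        super().__init__()
        # eeg_dim=310 assumes e.g. 62 channels x 5 band features;
        # eye_dim is likewise a placeholder for eye-movement features.
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        self.eye_proj = nn.Linear(eye_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, eeg=None, eye=None):
        tokens = []
        if eeg is not None:
            tokens.append(self.eeg_proj(eeg).unsqueeze(1))
        if eye is not None:
            tokens.append(self.eye_proj(eye).unsqueeze(1))
        if not tokens:
            raise ValueError("provide at least one modality")
        x = torch.cat(tokens, dim=1)        # (batch, n_modalities, d_model)
        return self.encoder(x).mean(dim=1)  # pooled joint representation

enc = FlexibleMultimodalEncoder()
eeg, eye = torch.randn(8, 310), torch.randn(8, 33)
h_both = enc(eeg=eeg, eye=eye)  # multimodal input
h_eeg = enc(eeg=eeg)            # EEG-only input
```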
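The abstract also states that adversarial training mitigates subject discrepancies. A common way to implement such subject-invariance is a gradient-reversal layer feeding a subject discriminator, in the style of domain-adversarial training (Ganin & Lempitsky, 2015); whether the MAET uses exactly this mechanism is not stated here, so this sketch is generic.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient
    in the backward pass, so the shared encoder learns to confuse the
    subject discriminator below."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SubjectDiscriminator(nn.Module):
    """Hypothetical adversarial head: predicts which subject a feature
    came from; gradient reversal pushes the encoder toward
    subject-invariant representations."""
    def __init__(self, feat_dim=128, n_subjects=20, lambd=1.0):
        super().__init__()
        self.lambd = lambd  # n_subjects and lambd are placeholders
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_subjects),
        )

    def forward(self, features):
        return self.net(GradReverse.apply(features, self.lambd))
```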
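Finally, the continuous-label filtering strategy amounts to keeping samples whose intensity annotation clears a threshold. A toy NumPy sketch of that step follows; the threshold value, array shapes, and label scale are assumptions, not the paper's protocol.

```python
import numpy as np

def filter_high_intensity(features, labels, intensities, threshold=0.5):
    """Keep only samples whose continuous intensity label meets
    `threshold`. Names and scale are illustrative; see the SEED-VII
    release for the real file format."""
    mask = intensities >= threshold
    return features[mask], labels[mask]

# Toy stand-in data, not actual SEED-VII signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 62, 5))  # e.g. 62 EEG channels x 5 band features
y = rng.integers(0, 7, size=100)   # 7 classes: six basic emotions + neutral
intensity = rng.random(100)        # assumed continuous scale in [0, 1]
X_hi, y_hi = filter_high_intensity(X, y, intensity)
```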
Source Journal

IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Annual publications: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.