Authors: Jiehui Jia, Huan Zhang, Jinhua Liang
DOI: arxiv-2409.07901 (https://doi.org/arxiv-2409.07901)
Journal: arXiv - CS - Multimedia
Publication date: 2024-09-12
Bridging Discrete and Continuous: A Multimodal Strategy for Complex Emotion Detection
In the domain of human-computer interaction, accurately recognizing and
interpreting human emotions is crucial yet challenging due to the complexity
and subtlety of emotional expressions. This study explores the potential for
detecting a rich and flexible range of emotions through a multimodal approach
that integrates facial expressions, voice tones, and transcripts from video
clips. We propose a novel framework that maps a variety of emotions into a
three-dimensional Valence-Arousal-Dominance (VAD) space, which reflects both
the fluctuations and the positivity/negativity of emotions, enabling a more
varied and comprehensive representation of emotional states. We employed
K-means clustering to transition emotions from traditional discrete
categorization to a continuous labeling system, and built an
emotion-recognition classifier on top of this system. The effectiveness of the
proposed model is evaluated on the MER2024 dataset, which contains culturally
consistent video clips from Chinese movies and TV series, annotated with both
discrete and open-vocabulary emotion labels. Our experiments successfully
achieved the transformation between discrete and continuous representations,
and the proposed model generated a more diverse and comprehensive emotion
vocabulary while maintaining strong accuracy.
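The core idea of bridging discrete and continuous labels can be illustrated with a minimal sketch: place each discrete emotion word at a point in VAD space, then cluster those points with K-means so that labels with similar valence, arousal, and dominance profiles fall into the same continuous grouping. The VAD coordinates, emotion names, and the tiny pure-Python K-means below are illustrative assumptions for exposition, not the paper's actual values or implementation.

```python
import math
import random

# Hypothetical VAD (valence, arousal, dominance) coordinates for a few
# discrete emotion labels. These numbers are placeholders for illustration,
# not the mappings used in the paper.
VAD = {
    "happy":   (0.8, 0.6, 0.5),
    "excited": (0.7, 0.9, 0.6),
    "sad":     (-0.7, -0.4, -0.5),
    "bored":   (-0.5, -0.7, -0.4),
    "angry":   (-0.6, 0.8, 0.4),
}

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means over 3-D points; returns (centroids, assignments)."""
    rng = random.Random(seed)
    # Initialize centroids as k distinct data points.
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        assign = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # Update step: each centroid becomes the mean of its members.
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, assign

points = list(VAD.values())
centroids, assign = kmeans(points, k=2)
# Labels with similar VAD profiles (e.g. "happy"/"excited") end up in the
# same cluster, giving a continuous grouping over the discrete vocabulary.
```

A classifier built on such a continuous system would then predict a position (or cluster) in VAD space rather than a single fixed category, which is what allows a richer, open-vocabulary description of emotional states.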