RMER-DT: Robust multimodal emotion recognition in conversational contexts based on diffusion and transformers

Xianxun Zhu, Yaoyang Wang, Erik Cambria, Imad Rida, José Santamaría López, Lin Cui, Rui Wang

Information Fusion, Volume 123, Article 103268. DOI: 10.1016/j.inffus.2025.103268. Published 6 May 2025.
As the digital age advances, multimodal emotion recognition (MER) technology is increasingly crucial in fields such as smart interaction and mental health assessment. However, emotion recognition in conversational contexts faces numerous challenges, particularly in effectively handling missing multimodal data. To address this issue, we propose RMER-DT (Robust Multimodal Emotion Recognition in Conversational Contexts based on Diffusion and Transformers), a novel MER model designed for accurate emotion recognition in conversational environments while coping with randomly absent modalities. To improve contextual, dialogue-based multimodal emotion recognition, RMER-DT introduces a novel data recovery strategy and an optimized framework. By integrating diffusion models and Transformer techniques, the model effectively recovers and fuses data from multiple modalities, such as audio, facial expressions, and text. Furthermore, RMER-DT strengthens semantic interaction and representation across modalities by introducing positional embeddings and speaker embeddings. Experimental results on the MELD and IEMOCAP datasets demonstrate significant advantages of RMER-DT over existing methods in handling MER tasks and in improving the accuracy and robustness of emotion recognition.
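The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch-style sketch of the general idea: a small denoising network recovers the embedding of a randomly missing modality conditioned on the observed ones, and a Transformer encoder fuses the per-utterance tokens after adding positional and speaker embeddings. Every name, dimension, and the simplified denoising loop (`RMERDTSketch`, `Denoiser`, `recover`, the assumed feature sizes) is an illustrative assumption, not the authors' released implementation.

```python
# Hypothetical sketch of the RMER-DT idea: recover a missing modality with a
# small diffusion-style denoiser, then fuse utterance tokens with a Transformer
# that adds positional and speaker embeddings. Names and sizes are assumptions.
import torch
import torch.nn as nn

D, N_SPEAKERS, N_EMOTIONS, MAX_LEN = 128, 8, 7, 32  # assumed sizes


class Denoiser(nn.Module):
    """Predicts the noise in a missing-modality embedding, conditioned on the
    mean of the modalities that are actually observed."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, 4 * d), nn.GELU(),
                                 nn.Linear(4 * d, d))

    def forward(self, noisy, condition):
        return self.net(torch.cat([noisy, condition], dim=-1))


class RMERDTSketch(nn.Module):
    def __init__(self, d=D):
        super().__init__()
        # One projection per modality into a shared embedding space
        # (feature sizes are placeholders, not the paper's).
        self.proj = nn.ModuleDict({
            "audio": nn.Linear(74, d),
            "vision": nn.Linear(35, d),
            "text": nn.Linear(768, d),
        })
        self.denoiser = Denoiser(d)
        self.pos_emb = nn.Embedding(MAX_LEN, d)     # utterance position
        self.spk_emb = nn.Embedding(N_SPEAKERS, d)  # speaker identity
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d, N_EMOTIONS)

    def recover(self, condition, steps=10):
        """Toy reverse-diffusion loop: start from noise and iteratively
        subtract the predicted noise (a crude stand-in for a real sampler)."""
        x = torch.randn_like(condition)
        for _ in range(steps):
            x = x - self.denoiser(x, condition) / steps
        return x

    def forward(self, feats, speaker_ids):
        """feats: dict modality -> (batch, seq, feat_dim), or None if missing;
        speaker_ids: (batch, seq) speaker index per utterance."""
        embedded = {m: self.proj[m](x)
                    for m, x in feats.items() if x is not None}
        condition = torch.stack(list(embedded.values())).mean(dim=0)
        for m in self.proj:  # fill in any absent modality
            if m not in embedded:
                embedded[m] = self.recover(condition)
        fused = torch.stack(list(embedded.values())).sum(dim=0)
        pos = torch.arange(fused.size(1), device=fused.device)
        fused = fused + self.pos_emb(pos) + self.spk_emb(speaker_ids)
        return self.classifier(self.encoder(fused))  # per-utterance logits


# Example: a two-utterance dialogue where the vision stream is missing.
model = RMERDTSketch()
feats = {"audio": torch.randn(1, 2, 74), "vision": None,
         "text": torch.randn(1, 2, 768)}
print(model(feats, speaker_ids=torch.tensor([[0, 1]])).shape)  # (1, 2, 7)
```

In this toy setup the reverse-diffusion sampler is collapsed into a short subtraction loop; a faithful reproduction would use a proper noise schedule and timestep conditioning as in standard DDPM-style samplers.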
Journal introduction:
Information Fusion serves as a central platform for showcasing advances in multi-sensor, multi-source, and multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.