Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning

IF 7.2 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yongtang Bao, Yuzhen Wang, Yutong Qi, Qing Yang, Ruijun Liu, Liping Feng
{"title":"使用对抗对比学习的情绪辅助多模态人格识别","authors":"Yongtang Bao ,&nbsp;Yuzhen Wang ,&nbsp;Yutong Qi ,&nbsp;Qing Yang ,&nbsp;Ruijun Liu ,&nbsp;Liping Feng","doi":"10.1016/j.knosys.2025.113504","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields like human–computer interaction. However, existing methods face feature extraction, noise removal, and modal alignment challenges. These issues impact recognition accuracy and model robustness. To address these issues, we propose an <strong>E</strong>motion-<strong>A</strong>ssisted multi-modal <strong>P</strong>ersonality <strong>R</strong>ecognition using adversarial <strong>C</strong>ontrastive learning (EAPRC). EAPRC leverages text, audio, and image data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to exploit the correlation between emotions and personality traits fully. It further improves the accuracy and stability of multi-modal personality recognition. Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"317 ","pages":"Article 113504"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning\",\"authors\":\"Yongtang Bao ,&nbsp;Yuzhen Wang ,&nbsp;Yutong Qi ,&nbsp;Qing Yang ,&nbsp;Ruijun Liu ,&nbsp;Liping Feng\",\"doi\":\"10.1016/j.knosys.2025.113504\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields like human–computer interaction. However, existing methods face feature extraction, noise removal, and modal alignment challenges. These issues impact recognition accuracy and model robustness. To address these issues, we propose an <strong>E</strong>motion-<strong>A</strong>ssisted multi-modal <strong>P</strong>ersonality <strong>R</strong>ecognition using adversarial <strong>C</strong>ontrastive learning (EAPRC). EAPRC leverages text, audio, and image data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to exploit the correlation between emotions and personality traits fully. It further improves the accuracy and stability of multi-modal personality recognition. 
Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"317 \",\"pages\":\"Article 113504\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125005507\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125005507","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields such as human–computer interaction. However, existing methods face challenges in feature extraction, noise removal, and modal alignment, which degrade recognition accuracy and model robustness. To address these issues, we propose Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning (EAPRC). EAPRC leverages text, audio, and image data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to fully exploit the correlation between emotions and personality traits, further improving the accuracy and stability of multi-modal personality recognition. Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.
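To make the two core mechanisms in the abstract concrete, the following is a minimal PyTorch sketch of FGSM-style adversarial sample generation on fused multi-modal features, an InfoNCE-style contrastive loss between clean and adversarial views, and a toy decision-level blend of trait and emotion scores. All module names, dimensions, and hyper-parameters (epsilon, temperature, alpha) are illustrative assumptions, not the authors' architecture; in particular, the paper's joint class propagation variant of the contrastive loss is not reproduced here.

```python
# Illustrative sketch only: hypothetical shapes and hyper-parameters,
# not the EAPRC authors' implementation.
import torch
import torch.nn.functional as F


def fgsm_adversarial(features, loss, epsilon=0.05):
    # Perturb features along the gradient sign of the task loss
    # (FGSM applied in feature space) to obtain an adversarial view.
    grad = torch.autograd.grad(loss, features, retain_graph=True)[0]
    return (features + epsilon * grad.sign()).detach()


def contrastive_loss(clean, adv, temperature=0.1):
    # InfoNCE: each clean embedding should match its own adversarial
    # view and repel the adversarial views of other batch samples.
    clean = F.normalize(clean, dim=-1)
    adv = F.normalize(adv, dim=-1)
    logits = clean @ adv.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(clean.size(0), device=clean.device)
    return F.cross_entropy(logits, targets)


# Hypothetical usage with random stand-ins for per-modality embeddings.
B, D = 16, 256
text_f, audio_f, image_f = (torch.randn(B, D) for _ in range(3))
fused = torch.cat([text_f, audio_f, image_f], dim=-1).requires_grad_(True)

trait_head = torch.nn.Linear(3 * D, 5)    # Big-Five trait scores
emotion_head = torch.nn.Linear(3 * D, 5)  # emotion-derived trait scores

traits = torch.sigmoid(trait_head(fused))
task_loss = F.mse_loss(traits, torch.rand(B, 5))  # dummy regression targets

adv_view = fgsm_adversarial(fused, task_loss)
ctr_loss = contrastive_loss(fused, adv_view)

# A simple stand-in for emotion score decision fusion: a weighted blend
# of the trait scores with the emotion-based scores.
alpha = 0.7
final_scores = alpha * traits + (1 - alpha) * torch.sigmoid(emotion_head(fused))
total_loss = task_loss + 0.5 * ctr_loss  # joint objective for backprop
```

The sketch only captures the generic clean-versus-adversarial contrast and a weighted decision fusion that such training builds on; how the paper propagates class information jointly across modalities, and how emotion features guide the feature-level fusion, would follow the full text.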
Source journal

Knowledge-Based Systems (Engineering & Technology – Computer Science: Artificial Intelligence)

CiteScore: 14.80
Self-citation rate: 12.50%
Publications: 1245
Review time: 7.8 months

Journal introduction: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.