Yongtang Bao, Yuzhen Wang, Yutong Qi, Qing Yang, Ruijun Liu, Liping Feng
{"title":"使用对抗对比学习的情绪辅助多模态人格识别","authors":"Yongtang Bao , Yuzhen Wang , Yutong Qi , Qing Yang , Ruijun Liu , Liping Feng","doi":"10.1016/j.knosys.2025.113504","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields like human–computer interaction. However, existing methods face feature extraction, noise removal, and modal alignment challenges. These issues impact recognition accuracy and model robustness. To address these issues, we propose an <strong>E</strong>motion-<strong>A</strong>ssisted multi-modal <strong>P</strong>ersonality <strong>R</strong>ecognition using adversarial <strong>C</strong>ontrastive learning (EAPRC). EAPRC leverages text, audio, and image data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to exploit the correlation between emotions and personality traits fully. It further improves the accuracy and stability of multi-modal personality recognition. Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"317 ","pages":"Article 113504"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning\",\"authors\":\"Yongtang Bao , Yuzhen Wang , Yutong Qi , Qing Yang , Ruijun Liu , Liping Feng\",\"doi\":\"10.1016/j.knosys.2025.113504\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields like human–computer interaction. However, existing methods face feature extraction, noise removal, and modal alignment challenges. These issues impact recognition accuracy and model robustness. To address these issues, we propose an <strong>E</strong>motion-<strong>A</strong>ssisted multi-modal <strong>P</strong>ersonality <strong>R</strong>ecognition using adversarial <strong>C</strong>ontrastive learning (EAPRC). EAPRC leverages text, audio, and image data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to exploit the correlation between emotions and personality traits fully. It further improves the accuracy and stability of multi-modal personality recognition. 
Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"317 \",\"pages\":\"Article 113504\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125005507\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125005507","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning
Multi-modal personality recognition integrates text, audio, and video information to accurately identify personality traits, offering significant value in fields such as human–computer interaction. However, existing methods face challenges in feature extraction, noise removal, and modal alignment, which degrade recognition accuracy and model robustness. To address these issues, we propose Emotion-Assisted multi-modal Personality Recognition using adversarial Contrastive learning (EAPRC). EAPRC leverages text, audio, and visual data, incorporating emotional information to enhance recognition accuracy and robustness through adversarial training. The model reduces inter-modal noise using adversarial sample generation and employs joint class propagation contrastive learning to extract discriminative feature representations. For emotion-based assistance, EAPRC uses emotion feature-guided fusion and emotion score decision fusion to fully exploit the correlation between emotions and personality traits, further improving the accuracy and stability of multi-modal personality recognition. Experimental results on the ChaLearn First Impressions and ELEA datasets demonstrate that EAPRC performs effectively, validating its capability in multi-modal personality recognition tasks.
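The abstract names its mechanisms only at a high level. For orientation, here is a minimal PyTorch sketch of what FGSM-style adversarial sample generation, a supervised contrastive loss, and emotion-feature-guided fusion could look like; every function name, the linear gate, the discretised trait labels, and the exact loss forms are illustrative assumptions, not the authors' implementation (EAPRC's joint class propagation contrastive learning and its decision fusion are more involved than this).

```python
# Illustrative sketch only: assumed shapes, names, and loss forms,
# not the EAPRC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_perturb(feats, labels, head, eps=0.05):
    """FGSM-style adversarial copy of a feature batch.
    `head` maps features to class logits; `eps` bounds the perturbation."""
    feats = feats.detach().requires_grad_(True)
    loss = F.cross_entropy(head(feats), labels)
    loss.backward()
    return (feats + eps * feats.grad.sign()).detach()

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Pull same-label embeddings together, push different-label apart."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau                          # pairwise cosine / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(1, keepdim=True))
    pos_per_row = pos_mask.sum(1).clamp(min=1)
    return -((pos_mask.float() * log_prob).sum(1) / pos_per_row).mean()

def emotion_guided_fusion(modal_feats, emo_feat, gate):
    """Weight text/audio/visual features by an emotion-conditioned gate."""
    w = torch.softmax(gate(emo_feat), dim=-1)        # (B, num_modalities)
    stacked = torch.stack(modal_feats, dim=1)        # (B, M, D)
    return (w.unsqueeze(-1) * stacked).sum(1)        # (B, D)

# Toy usage: three 64-d modality features, an 8-d emotion vector,
# and binarised trait labels for the contrastive term.
B, D, E = 16, 64, 8
head, gate = nn.Linear(D, 2), nn.Linear(E, 3)
labels = torch.randint(0, 2, (B,))
modal = [torch.randn(B, D) for _ in range(3)]
fused = emotion_guided_fusion(modal, torch.randn(B, E), gate)
adv = adversarial_perturb(fused, labels, head)
loss = supervised_contrastive_loss(torch.cat([fused, adv]), labels.repeat(2))
```

Under the same caveat, the emotion score decision fusion the abstract mentions would amount to blending trait predictions from the fused features with predictions mapped from emotion scores, e.g. `alpha * trait_logits + (1 - alpha) * emo_to_trait(emo_scores)`, where `emo_to_trait` is a hypothetical learned mapping.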
Journal overview:
Knowledge-Based Systems is an international, interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to balance coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.