{"title":"面向情感的跨模态提示和对齐以人为中心的情感视频字幕","authors":"Yu Wang;Yuanyuan Liu;Shunping Zhou;Yuxuan Huang;Chang Tang;Wujie Zhou;Zhe Chen","doi":"10.1109/TMM.2025.3535292","DOIUrl":null,"url":null,"abstract":"Human-centric Emotional Video Captioning (H-EVC) aims to generate fine-grained, emotion-related sentences for human-based videos, enhancing the understanding of human emotions and facilitating human-computer emotional interaction. However, existing video captioning methods often overlook subtle emotional clues and interactions in videos. As a result, the generated captions frequently lack emotional information. To address this, we propose <bold>E</b>motion-oriented <bold>C</b>ross-modal <bold>P</b>rompting and <bold>A</b>lignment (ECPA), which improves HEVC accuracy by modeling fine-grained visual-textual emotion clues. Using large foundation models, ECPA introduces two learnable prompting strategies: visual emotion prompting (VEP) and textual emotion prompting (TEP), along with an emotion-oriented cross-modal alignment (ECA) module. VEP uses two levels of visual prompts, <italic>i.e.</i>, emotion recognition (ER) and action unit (AU), to focus on both coarse and fine visual emotional features. TEP devise two-level learnable textual prompts, <italic>i.e.</i>, sentence-level emotional tokens and word-level masked tokens to capture global and local textual emotion representations. ECA introduces another two levels of emotion-oriented prompt alignment learning mechanisms: the ER-sentence level and the AU-word level alignment losses. Both enhance the model's ability to capture and integrate both global and local cross-modal emotion semantics, thereby enabling the generation of fine-grained emotional linguistic descriptions in video captioning. Experiments show ECPA significantly outperforms state-of-the-art methods on various H-EVC datasets (relative improvements of 9.98%, 5.72%, 4.46%, 24.52% on MAFW, and 12.82%, 20.27%, 4.22%, 5.01% on EmVidCap across four evaluation metrics) and supports zero-shot tasks on MSVD and MSRVTT, demonstrating strong applicability and generalization.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"3766-3780"},"PeriodicalIF":9.7000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emotion-Oriented Cross-Modal Prompting and Alignment for Human-Centric Emotional Video Captioning\",\"authors\":\"Yu Wang;Yuanyuan Liu;Shunping Zhou;Yuxuan Huang;Chang Tang;Wujie Zhou;Zhe Chen\",\"doi\":\"10.1109/TMM.2025.3535292\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human-centric Emotional Video Captioning (H-EVC) aims to generate fine-grained, emotion-related sentences for human-based videos, enhancing the understanding of human emotions and facilitating human-computer emotional interaction. However, existing video captioning methods often overlook subtle emotional clues and interactions in videos. As a result, the generated captions frequently lack emotional information. To address this, we propose <bold>E</b>motion-oriented <bold>C</b>ross-modal <bold>P</b>rompting and <bold>A</b>lignment (ECPA), which improves HEVC accuracy by modeling fine-grained visual-textual emotion clues. Using large foundation models, ECPA introduces two learnable prompting strategies: visual emotion prompting (VEP) and textual emotion prompting (TEP), along with an emotion-oriented cross-modal alignment (ECA) module. 
VEP uses two levels of visual prompts, <italic>i.e.</i>, emotion recognition (ER) and action unit (AU), to focus on both coarse and fine visual emotional features. TEP devise two-level learnable textual prompts, <italic>i.e.</i>, sentence-level emotional tokens and word-level masked tokens to capture global and local textual emotion representations. ECA introduces another two levels of emotion-oriented prompt alignment learning mechanisms: the ER-sentence level and the AU-word level alignment losses. Both enhance the model's ability to capture and integrate both global and local cross-modal emotion semantics, thereby enabling the generation of fine-grained emotional linguistic descriptions in video captioning. Experiments show ECPA significantly outperforms state-of-the-art methods on various H-EVC datasets (relative improvements of 9.98%, 5.72%, 4.46%, 24.52% on MAFW, and 12.82%, 20.27%, 4.22%, 5.01% on EmVidCap across four evaluation metrics) and supports zero-shot tasks on MSVD and MSRVTT, demonstrating strong applicability and generalization.\",\"PeriodicalId\":13273,\"journal\":{\"name\":\"IEEE Transactions on Multimedia\",\"volume\":\"27 \",\"pages\":\"3766-3780\"},\"PeriodicalIF\":9.7000,\"publicationDate\":\"2025-03-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Multimedia\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10909571/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10909571/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Emotion-Oriented Cross-Modal Prompting and Alignment for Human-Centric Emotional Video Captioning
Human-centric Emotional Video Captioning (H-EVC) aims to generate fine-grained, emotion-related sentences for human-centered videos, enhancing the understanding of human emotions and facilitating human-computer emotional interaction. However, existing video captioning methods often overlook subtle emotional clues and interactions in videos. As a result, the generated captions frequently lack emotional information. To address this, we propose Emotion-oriented Cross-modal Prompting and Alignment (ECPA), which improves H-EVC accuracy by modeling fine-grained visual-textual emotion clues. Building on large foundation models, ECPA introduces two learnable prompting strategies, visual emotion prompting (VEP) and textual emotion prompting (TEP), along with an emotion-oriented cross-modal alignment (ECA) module. VEP uses two levels of visual prompts, i.e., emotion recognition (ER) and action unit (AU), to focus on both coarse and fine visual emotional features. TEP devises two levels of learnable textual prompts, i.e., sentence-level emotional tokens and word-level masked tokens, to capture global and local textual emotion representations. ECA introduces two further levels of emotion-oriented prompt alignment learning: ER-sentence-level and AU-word-level alignment losses. These losses enhance the model's ability to capture and integrate global and local cross-modal emotion semantics, thereby enabling the generation of fine-grained emotional linguistic descriptions in video captioning. Experiments show that ECPA significantly outperforms state-of-the-art methods on H-EVC datasets (relative improvements of 9.98%, 5.72%, 4.46%, and 24.52% on MAFW, and 12.82%, 20.27%, 4.22%, and 5.01% on EmVidCap across four evaluation metrics) and supports zero-shot tasks on MSVD and MSRVTT, demonstrating strong applicability and generalization.
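To make the idea of "two levels of alignment losses" more concrete, the following Python sketch illustrates one plausible formulation of a global (ER-sentence) and local (AU-word) cross-modal alignment objective. The abstract does not specify the exact losses used by ECPA, so this sketch assumes a symmetric InfoNCE-style contrastive objective; the feature names (er_feats, sent_feats, au_feats, word_feats) and the weighting scheme are hypothetical, not taken from the paper.

```python
# Sketch of two-level emotion-oriented cross-modal alignment (assumed formulation).
import torch
import torch.nn.functional as F


def contrastive_alignment(x: torch.Tensor, y: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss that pulls matched visual/textual pairs together.

    x: (B, D) visual-side features, y: (B, D) textual-side features.
    """
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature                 # (B, B) pairwise similarities
    targets = torch.arange(x.size(0), device=x.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def eca_alignment_loss(er_feats, sent_feats, au_feats, word_feats,
                       w_global: float = 1.0, w_local: float = 1.0) -> torch.Tensor:
    """Combine a global (ER vs. sentence tokens) and a local (AU vs. word tokens) term.

    All inputs are assumed to be pooled (B, D) representations per video/caption.
    """
    loss_global = contrastive_alignment(er_feats, sent_feats)   # coarse emotion alignment
    loss_local = contrastive_alignment(au_feats, word_feats)    # fine-grained AU-word alignment
    return w_global * loss_global + w_local * loss_local
```

In this reading, the global term encourages the coarse emotion-recognition prompt features to agree with sentence-level emotional tokens, while the local term ties action-unit features to word-level masked tokens; the two terms are simply summed with tunable weights.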
Journal Introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.