{"title":"Altering Query Prompting With Contrastive Learning for Multimodal Intent Recognition","authors":"Yuxin Jia;Xueping Wang;Zhanpeng Shao;Min Liu","doi":"10.1109/LSP.2025.3599107","DOIUrl":null,"url":null,"abstract":"Multimodal intent recognition utilizes heterogeneous modalities such as visual, auditory, and textual cues to infer user intent, serving as a pivotal component in human-machine interaction. Existing approaches, however, often rely on unimodal paradigms or shallow multimodal fusion, failing to model cross-modal semantic dependencies and struggling to extract discriminative features from non-verbal modalities, limiting their robustness in complex scenarios. To mitigate these limitations, we propose an Altering Query Prompting with Contrastive Learning framework (AQP-CL) that dynamically aligns and refines multimodal representations. Specifically, the Altering Query Prompting (AQP) module introduces a tri-modality rotation attention mechanism, where textual, visual, and acoustic modalities cyclically alternate as queries in cross-attention operations. This approach addresses modality bias while strengthening interdependencies between modalities, ultimately yielding intent-aware fused feature representations that preserve discriminative cues. The Label-semantic Augmented Contrastive Learning (LACL) strategy generates augmented samples through the intent-aware query prompt and enhances feature discrimination via NT-Xent loss on label tokens. By integrating high-confidence textual semantics from intent labels, LACL refines auxiliary modality features through contrastive alignment, ensuring robust cross-modal representation learning. Evaluations on IEMOCAP and MIntRec validate AQP-CL’s superiority, achieving state-of-the-art precision of 77.78% on IEMOCAP, a 3.41% improvement over existing methods.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3345-3349"},"PeriodicalIF":3.9000,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11125866/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Multimodal intent recognition utilizes heterogeneous modalities such as visual, auditory, and textual cues to infer user intent, serving as a pivotal component in human-machine interaction. Existing approaches, however, often rely on unimodal paradigms or shallow multimodal fusion, failing to model cross-modal semantic dependencies and struggling to extract discriminative features from non-verbal modalities, limiting their robustness in complex scenarios. To mitigate these limitations, we propose an Altering Query Prompting with Contrastive Learning framework (AQP-CL) that dynamically aligns and refines multimodal representations. Specifically, the Altering Query Prompting (AQP) module introduces a tri-modality rotation attention mechanism, where textual, visual, and acoustic modalities cyclically alternate as queries in cross-attention operations. This approach addresses modality bias while strengthening interdependencies between modalities, ultimately yielding intent-aware fused feature representations that preserve discriminative cues. The Label-semantic Augmented Contrastive Learning (LACL) strategy generates augmented samples through the intent-aware query prompt and enhances feature discrimination via NT-Xent loss on label tokens. By integrating high-confidence textual semantics from intent labels, LACL refines auxiliary modality features through contrastive alignment, ensuring robust cross-modal representation learning. Evaluations on IEMOCAP and MIntRec validate AQP-CL’s superiority, achieving state-of-the-art precision of 77.78% on IEMOCAP, a 3.41% improvement over existing methods.
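The tri-modality rotation attention at the core of AQP lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming each modality takes one turn as the query in cross-attention over the concatenation of the other two; all module names, dimensions, the residual-plus-norm step, and the final pooling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RotationCrossAttention(nn.Module):
    """Sketch of tri-modality rotation attention: text, visual, and
    acoustic features cyclically alternate as the query."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # One cross-attention block per rotation (query modality).
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(3)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, visual, acoustic):
        # Each input: (batch, seq_len, dim). On rotation i, modality i
        # is the query; the other two are concatenated as key/value.
        mods = [text, visual, acoustic]
        refined = []
        for i, attn in enumerate(self.attn):
            query = mods[i]
            context = torch.cat([mods[(i + 1) % 3], mods[(i + 2) % 3]], dim=1)
            out, _ = attn(query, context, context)
            refined.append(self.norm(query + out))  # residual + norm
        # Naive fusion for the sketch: mean-pool each refined stream,
        # then average the three pooled vectors into one fused feature.
        return torch.stack([r.mean(dim=1) for r in refined]).mean(dim=0)
```

For LACL, the abstract names the NT-Xent (normalized-temperature cross-entropy) contrastive loss on label tokens. A standard symmetric formulation is sketched below, under the assumption that fused features are contrasted against embeddings of their intent-label tokens, with positive pairs on the diagonal of the similarity matrix.

```python
import torch
import torch.nn.functional as F


def nt_xent(fused, label_emb, temperature: float = 0.5):
    """NT-Xent over a batch: fused[i] and label_emb[i] form the positive
    pair; every other row in the batch acts as a negative."""
    z1 = F.normalize(fused, dim=1)       # (B, dim)
    z2 = F.normalize(label_emb, dim=1)   # (B, dim)
    logits = z1 @ z2.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric InfoNCE: each view predicts its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

In this reading, each fused multimodal feature is pulled toward the high-confidence textual semantics of its own intent label and pushed away from the other labels in the batch, which matches the contrastive-alignment role the abstract assigns to LACL.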
Journal Introduction
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.