Clip4Vis: Parameter-free fusion for multimodal video recognition

IF 5.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Qishi Zheng, Mengnan He, Jiuqin Duan, Gai Luo, Pengcheng Wu, Yimin Han, Qingyue Min, Peng Chen, Ping Zhang
{"title":"Clip4Vis: Parameter-free fusion for multimodal video recognition","authors":"Qishi Zheng ,&nbsp;Mengnan He ,&nbsp;Jiuqin Duan ,&nbsp;Gai Luo ,&nbsp;Pengcheng Wu ,&nbsp;Yimin Han ,&nbsp;Qingyue Min ,&nbsp;Peng Chen ,&nbsp;Ping Zhang","doi":"10.1016/j.neucom.2025.131046","DOIUrl":null,"url":null,"abstract":"<div><div>Multimodal video recognition has emerged as a central focus due to its ability to effectively integrate information from diverse modalities, such as video and text. However, traditional fusion methods typically rely on trainable parameters, resulting in increased model computational costs. To address these challenges, this paper presents <strong>Clip4Vis</strong>, a zero-parameter progressive fusion framework that combines video and text features using a shallow-to-deep approach. The shallow and deep fusion steps are implemented through two key modules: (i) <strong>Cross-Model Attention</strong>, a module that enhances video embeddings with textual information, enabling adaptive focus on keyframes to improve action representation in the video. (ii) <strong>Joint Temporal-Textual Aggregation</strong>, a module that integrates video embeddings and word embeddings by jointly utilizing temporal and textual information, enabling global information aggregation. Extensive evaluations on five widely used video datasets demonstrate that our method achieves competitive performance in general, zero-shot, and few-shot video recognition. Our best model, using the released CLIP model, achieves a state-of-the-art accuracy of 87.4 % for general recognition on Kinetics-400 and 75.3 % for zero-shot recognition on Kinetics-600. The code will be released later.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131046"},"PeriodicalIF":5.5000,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225017187","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal video recognition has emerged as a central focus due to its ability to effectively integrate information from diverse modalities, such as video and text. However, traditional fusion methods typically rely on trainable parameters, resulting in increased model computational costs. To address these challenges, this paper presents Clip4Vis, a zero-parameter progressive fusion framework that combines video and text features using a shallow-to-deep approach. The shallow and deep fusion steps are implemented through two key modules: (i) Cross-Model Attention, a module that enhances video embeddings with textual information, enabling adaptive focus on keyframes to improve action representation in the video. (ii) Joint Temporal-Textual Aggregation, a module that integrates video embeddings and word embeddings by jointly utilizing temporal and textual information, enabling global information aggregation. Extensive evaluations on five widely used video datasets demonstrate that our method achieves competitive performance in general, zero-shot, and few-shot video recognition. Our best model, using the released CLIP model, achieves a state-of-the-art accuracy of 87.4 % for general recognition on Kinetics-400 and 75.3 % for zero-shot recognition on Kinetics-600. The code will be released later.
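The abstract does not give the exact formulation of the two modules, but the core idea, fusing CLIP video and text features without introducing any trainable fusion weights, can be illustrated with a short sketch. Everything below is an assumption for illustration: the function names, tensor shapes, residual re-weighting, and the temperature value are hypothetical and are not taken from the paper; the sketch only shows how text-conditioned frame weighting and similarity-based aggregation can be done parameter-free on top of pre-extracted CLIP embeddings.

```python
import torch
import torch.nn.functional as F

def cross_modal_attention(frame_emb, text_emb, temperature=0.07):
    """Illustrative, parameter-free cross-modal step (assumed, not the paper's code).

    Uses the sentence-level text embedding as a query to re-weight per-frame
    video embeddings, so frames that match the text ("keyframes") get more
    weight. No trainable weights are introduced.

    frame_emb: (T, D) per-frame CLIP visual embeddings
    text_emb:  (D,)   sentence-level CLIP text embedding
    """
    f = F.normalize(frame_emb, dim=-1)              # (T, D)
    t = F.normalize(text_emb, dim=-1)               # (D,)
    scores = f @ t / temperature                    # (T,) text-frame similarity
    weights = scores.softmax(dim=0)                 # attention over frames
    enhanced = frame_emb + weights.unsqueeze(-1) * frame_emb  # residual re-weighting
    return enhanced, weights

def joint_temporal_textual_aggregation(frame_emb, word_emb):
    """Illustrative, parameter-free aggregation step (assumed, not the paper's code).

    Pools frame embeddings with weights derived from frame-word similarity,
    producing a single video-level representation for matching against
    class prompts.

    frame_emb: (T, D) enhanced per-frame embeddings
    word_emb:  (L, D) per-token CLIP text embeddings
    """
    f = F.normalize(frame_emb, dim=-1)              # (T, D)
    w = F.normalize(word_emb, dim=-1)               # (L, D)
    affinity = f @ w.T                              # (T, L) frame-word similarity
    frame_weights = affinity.mean(dim=1).softmax(dim=0)            # (T,)
    video_repr = (frame_weights.unsqueeze(-1) * frame_emb).sum(dim=0)  # (D,)
    return video_repr

if __name__ == "__main__":
    frames = torch.randn(8, 512)    # e.g. 8 frames of CLIP ViT-B/32 features
    sent = torch.randn(512)         # sentence-level text feature
    words = torch.randn(12, 512)    # 12 token-level text features
    enhanced, attn = cross_modal_attention(frames, sent)
    video = joint_temporal_textual_aggregation(enhanced, words)
    print(attn.shape, video.shape)  # torch.Size([8]) torch.Size([512])
```

Because both steps use only cosine similarity, softmax, and weighted sums over frozen CLIP features, they add no trainable fusion parameters, which is the property the abstract emphasizes for Clip4Vis.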
Source journal

Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published: 1382
Review time: 70 days

Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.