CLIP-based knowledge projector for image–text matching

IF 6.9 · CAS Tier 1 (Management Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Xinfeng Dong, Dingwen Zhang, Longfei Han, Huaxiang Zhang, Li Liu, Junwei Han
{"title":"基于clip的图像-文本匹配知识投影仪","authors":"Xinfeng Dong ,&nbsp;Dingwen Zhang ,&nbsp;Longfei Han ,&nbsp;Huaxiang Zhang ,&nbsp;Li Liu ,&nbsp;Junwei Han","doi":"10.1016/j.ipm.2025.104357","DOIUrl":null,"url":null,"abstract":"<div><div>Image–text matching is an essential research area within multimedia research. However, images often contain richer information than text, and representing an image with only one vector can be limited to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. These embeddings capture different semantics of an image, guided by prior knowledge from the large vision-language pretrained model CLIP(Contrastive Language-Image Pre-Training). To ensure that the generated slot features stay aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into slot representations. During the test phase, we present an effective and explainable similarity calculation method compared with existing fine-grained image–text matching methods. The proposed framework’s effectiveness is evidenced by the experimental results, with performance improvements of at least 7% in R@1 on image retrieval tasks compared to CLIP on the MSCOCO and Flickr30K datasets.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 1","pages":"Article 104357"},"PeriodicalIF":6.9000,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CLIP-based knowledge projector for image–text matching\",\"authors\":\"Xinfeng Dong ,&nbsp;Dingwen Zhang ,&nbsp;Longfei Han ,&nbsp;Huaxiang Zhang ,&nbsp;Li Liu ,&nbsp;Junwei Han\",\"doi\":\"10.1016/j.ipm.2025.104357\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Image–text matching is an essential research area within multimedia research. However, images often contain richer information than text, and representing an image with only one vector can be limited to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. These embeddings capture different semantics of an image, guided by prior knowledge from the large vision-language pretrained model CLIP(Contrastive Language-Image Pre-Training). To ensure that the generated slot features stay aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into slot representations. During the test phase, we present an effective and explainable similarity calculation method compared with existing fine-grained image–text matching methods. 
The proposed framework’s effectiveness is evidenced by the experimental results, with performance improvements of at least 7% in R@1 on image retrieval tasks compared to CLIP on the MSCOCO and Flickr30K datasets.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"63 1\",\"pages\":\"Article 104357\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325002985\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325002985","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Image–text matching is an essential problem in multimedia research. However, images often contain richer information than text, and representing an image with a single vector may fail to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. Guided by prior knowledge from the large vision–language pretrained model CLIP (Contrastive Language–Image Pre-training), these embeddings capture different semantics of the image. To keep the generated slot features aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into the slot representations. For the test phase, we present a similarity calculation method that is effective and more explainable than those of existing fine-grained image–text matching methods. Experimental results evidence the framework's effectiveness, with R@1 improvements of at least 7% on image retrieval compared with CLIP on the MSCOCO and Flickr30K datasets.
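The abstract describes three pieces: a projector that turns one image into several slot embeddings, an adaptive weighted fusion that mixes the global CLIP feature back into each slot, and an explainable similarity score at test time. Below is a minimal, hypothetical PyTorch sketch of that pipeline as read from the abstract alone; the module names (`KnowledgeProjector`, `slot_text_similarity`), the cross-attention and gating design, and hyperparameters such as `num_slots` are our assumptions for illustration, not the authors' published implementation.

```python
# Illustrative sketch only: one plausible reading of the abstract, assuming
# CLIP-style patch tokens and global features as inputs. All names and the
# attention/gating design are hypothetical, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeProjector(nn.Module):
    """Projects CLIP image features into a set of slot embeddings, then
    fuses the global CLIP feature into each slot with adaptive weights."""

    def __init__(self, dim: int = 512, num_slots: int = 4):
        super().__init__()
        # Learnable slot queries; each is intended to attend to a different
        # semantic aspect of the image (assumed design choice).
        self.slot_queries = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Adaptive weighted fusion: a gate predicts, per slot, how much of
        # the global feature to mix in, keeping slots aligned with the
        # image's global semantics.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, patch_tokens: torch.Tensor, global_feat: torch.Tensor):
        # patch_tokens: (B, N, D) CLIP patch embeddings; global_feat: (B, D).
        B = patch_tokens.size(0)
        queries = self.slot_queries.unsqueeze(0).expand(B, -1, -1)  # (B, S, D)
        slots, _ = self.attn(queries, patch_tokens, patch_tokens)   # (B, S, D)
        g = global_feat.unsqueeze(1).expand_as(slots)               # (B, S, D)
        w = self.gate(torch.cat([slots, g], dim=-1))                # (B, S, 1)
        fused = w * slots + (1.0 - w) * g                           # weighted fusion
        return F.normalize(fused, dim=-1)                           # unit-norm slots


def slot_text_similarity(slots: torch.Tensor, text_feat: torch.Tensor):
    """Explainable image-text score: cosine similarity of the caption
    against every slot; the argmax names the best-matching slot."""
    text = F.normalize(text_feat, dim=-1)              # (B, D)
    sims = torch.einsum("bsd,bd->bs", slots, text)     # (B, S) per-slot scores
    score, best_slot = sims.max(dim=-1)                # max over slots
    return score, best_slot


if __name__ == "__main__":
    # Smoke test with random tensors shaped like CLIP ViT-B/32 outputs (assumed).
    projector = KnowledgeProjector(dim=512, num_slots=4)
    patches, g = torch.randn(2, 49, 512), torch.randn(2, 512)
    slots = projector(patches, g)
    score, best = slot_text_similarity(slots, torch.randn(2, 512))
    print(score.shape, best.shape)  # torch.Size([2]) torch.Size([2])
```

Scoring a caption against every slot and taking the maximum is one way to realize the explainability the abstract claims: the argmax slot indicates which semantic aspect of the image the caption matched, rather than collapsing everything into a single opaque vector comparison.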
Source journal
Information Processing & Management (Engineering/Technology – Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles published: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.