Xinfeng Dong , Dingwen Zhang , Longfei Han , Huaxiang Zhang , Li Liu , Junwei Han
{"title":"基于clip的图像-文本匹配知识投影仪","authors":"Xinfeng Dong , Dingwen Zhang , Longfei Han , Huaxiang Zhang , Li Liu , Junwei Han","doi":"10.1016/j.ipm.2025.104357","DOIUrl":null,"url":null,"abstract":"<div><div>Image–text matching is an essential research area within multimedia research. However, images often contain richer information than text, and representing an image with only one vector can be limited to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. These embeddings capture different semantics of an image, guided by prior knowledge from the large vision-language pretrained model CLIP(Contrastive Language-Image Pre-Training). To ensure that the generated slot features stay aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into slot representations. During the test phase, we present an effective and explainable similarity calculation method compared with existing fine-grained image–text matching methods. The proposed framework’s effectiveness is evidenced by the experimental results, with performance improvements of at least 7% in R@1 on image retrieval tasks compared to CLIP on the MSCOCO and Flickr30K datasets.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 1","pages":"Article 104357"},"PeriodicalIF":6.9000,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CLIP-based knowledge projector for image–text matching\",\"authors\":\"Xinfeng Dong , Dingwen Zhang , Longfei Han , Huaxiang Zhang , Li Liu , Junwei Han\",\"doi\":\"10.1016/j.ipm.2025.104357\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Image–text matching is an essential research area within multimedia research. However, images often contain richer information than text, and representing an image with only one vector can be limited to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. These embeddings capture different semantics of an image, guided by prior knowledge from the large vision-language pretrained model CLIP(Contrastive Language-Image Pre-Training). To ensure that the generated slot features stay aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into slot representations. During the test phase, we present an effective and explainable similarity calculation method compared with existing fine-grained image–text matching methods. 
The proposed framework’s effectiveness is evidenced by the experimental results, with performance improvements of at least 7% in R@1 on image retrieval tasks compared to CLIP on the MSCOCO and Flickr30K datasets.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"63 1\",\"pages\":\"Article 104357\"},\"PeriodicalIF\":6.9000,\"publicationDate\":\"2025-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325002985\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325002985","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
CLIP-based knowledge projector for image–text matching
Image–text matching is an essential area of multimedia research. However, images often contain richer information than text, and representing an image with a single vector may fail to fully capture its semantics, leading to suboptimal performance in cross-modal matching tasks. To address this limitation, we propose a CLIP-based knowledge projector network that encodes an image into a set of embeddings. These embeddings capture different semantics of the image, guided by prior knowledge from the large vision-language pretrained model CLIP (Contrastive Language-Image Pre-training). To ensure that the generated slot features stay aligned with global semantics, we design an adaptive weighted fusion module that incorporates global features into the slot representations. For the test phase, we present a similarity calculation method that is both effective and more explainable than existing fine-grained image–text matching methods. The framework's effectiveness is evidenced by the experimental results: it improves R@1 on image retrieval by at least 7% over CLIP on the MSCOCO and Flickr30K datasets.
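The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical sketch of how such a slot-based projector could sit on top of frozen CLIP features. The module names, the gating form used for the adaptive weighted fusion, and the max-over-slots similarity are assumptions inferred from the abstract, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): learnable slot queries attend over CLIP
# patch tokens to produce a set of image embeddings, a gated fusion mixes in the global
# CLIP embedding, and inference scores a caption by its best-matching slot.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotProjector(nn.Module):
    def __init__(self, dim: int = 512, num_slots: int = 8, num_heads: int = 8):
        super().__init__()
        # Learnable slot queries that project one image into a set of embeddings.
        self.slot_queries = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Assumed form of the adaptive weighted fusion: a per-slot gate on the global feature.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, patch_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # patch_feats: (B, N, D) CLIP patch tokens; global_feat: (B, D) CLIP global embedding.
        B = patch_feats.size(0)
        queries = self.slot_queries.unsqueeze(0).expand(B, -1, -1)    # (B, S, D)
        slots, _ = self.attn(queries, patch_feats, patch_feats)       # (B, S, D)
        g = global_feat.unsqueeze(1).expand_as(slots)                 # (B, S, D)
        w = torch.sigmoid(self.gate(torch.cat([slots, g], dim=-1)))   # (B, S, 1)
        fused = w * slots + (1.0 - w) * g                             # keep slots tied to global semantics
        return F.normalize(fused, dim=-1)


def image_text_similarity(slot_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
    # slot_feats: (B, S, D); text_feat: (B, D) for matched image-caption pairs.
    # Score the caption against every slot and keep the best-matching one.
    text_feat = F.normalize(text_feat, dim=-1)
    sims = torch.einsum("bsd,bd->bs", slot_feats, text_feat)          # (B, S)
    return sims.max(dim=-1).values                                    # (B,)
```

A max-over-slots score of this kind is easy to inspect, since the index of the winning slot indicates which image embedding matched the caption, which is consistent with the explainability claim in the abstract.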
Journal Introduction:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.