{"title":"ClipCap+ +: An efficient image captioning approach via image encoder optimization and LLM fine-tuning","authors":"Ruiqin Wang , Ye Wu , Zhenzhen Sheng","doi":"10.1016/j.asoc.2025.113469","DOIUrl":null,"url":null,"abstract":"<div><div>ClipCap (CLIP prefix for image captioning), a leading image captioning model, exhibits limitations in recognizing images within specific domains. This study presents ClipCap+ +, an enhanced version of ClipCap that integrates key-value pair and residual connection modules. The key-value pair module implements a few-shot learning strategy by incorporating domain-specific knowledge, thereby improving the model's capability to recognize specialized image categories. The residual connection module optimizes the weight distribution between the pre-trained model and the key-value pair module, enhancing the model's transfer learning performance. During the inference phase, the model processes an input image through a multi-stage pipeline: (1) the visual encoder extracts image features to generate a hard visual prompt, (2) the key-value pair module dynamically constructs a domain-specific soft prompt, and (3) these complementary prompts are jointly fed into the large language model to synthesize the final image description. Extensive experiments on in-domain, near-domain, and cross-domain tasks show ClipCap+ + surpasses state-of-the-art models in accuracy, training efficiency, and generalization.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"180 ","pages":"Article 113469"},"PeriodicalIF":7.2000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156849462500780X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
ClipCap (CLIP prefix for image captioning), a leading image captioning model, exhibits limitations in recognizing images within specific domains. This study presents ClipCap++, an enhanced version of ClipCap that integrates key-value pair and residual connection modules. The key-value pair module implements a few-shot learning strategy by incorporating domain-specific knowledge, thereby improving the model's capability to recognize specialized image categories. The residual connection module optimizes the weight distribution between the pre-trained model and the key-value pair module, enhancing the model's transfer learning performance. During the inference phase, the model processes an input image through a multi-stage pipeline: (1) the visual encoder extracts image features to generate a hard visual prompt, (2) the key-value pair module dynamically constructs a domain-specific soft prompt, and (3) these complementary prompts are jointly fed into the large language model to synthesize the final image description. Extensive experiments on in-domain, near-domain, and cross-domain tasks show that ClipCap++ surpasses state-of-the-art models in accuracy, training efficiency, and generalization.
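To make the three-stage pipeline concrete, the sketch below illustrates how such an architecture could be wired together in PyTorch. It is a minimal reconstruction from the abstract alone, not the authors' released code: all module names (KeyValuePromptModule, ClipCapPlusPlus), dimensions, the attention-based key-value lookup, and the alpha-weighted residual blend are illustrative assumptions.

```python
# Minimal sketch of the ClipCap++ inference pipeline described in the abstract.
# Assumptions: module names, dimensions, and the alpha-weighted residual blend
# are hypothetical; the paper's exact design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValuePromptModule(nn.Module):
    """Builds a domain-specific soft prompt by attending over stored key-value pairs."""
    def __init__(self, num_pairs=64, dim=512, prompt_len=10):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_pairs, dim))                 # domain keys
        self.values = nn.Parameter(torch.randn(num_pairs, prompt_len, dim))   # soft-prompt values

    def forward(self, img_feat):                                              # img_feat: (B, dim)
        # scaled dot-product attention of the image feature against the keys
        attn = F.softmax(img_feat @ self.keys.t() / img_feat.size(-1) ** 0.5, dim=-1)  # (B, N)
        # weighted mixture of the stored value prompts -> (B, prompt_len, dim)
        return torch.einsum('bn,nld->bld', attn, self.values)

class ClipCapPlusPlus(nn.Module):
    def __init__(self, dim=512, prompt_len=10, alpha=0.5):
        super().__init__()
        self.kv_module = KeyValuePromptModule(dim=dim, prompt_len=prompt_len)
        self.mapper = nn.Linear(dim, prompt_len * dim)  # maps CLIP features to the hard prompt
        self.alpha = alpha                              # residual weight between the two paths
        self.dim, self.prompt_len = dim, prompt_len

    def forward(self, clip_feat):                       # clip_feat: (B, dim) CLIP image embedding
        B = clip_feat.size(0)
        # (1) hard visual prompt from the pre-trained encoder path
        hard = self.mapper(clip_feat).view(B, self.prompt_len, self.dim)
        # (2) domain-specific soft prompt from the key-value pair module
        soft = self.kv_module(clip_feat)
        # residual connection: blend the pre-trained path with the key-value path
        soft = self.alpha * hard + (1 - self.alpha) * soft
        # (3) joint prefix to feed into the language model
        return torch.cat([hard, soft], dim=1)           # (B, 2 * prompt_len, dim)
```

Under these assumptions, the returned prefix embeddings would be concatenated with the caption token embeddings and passed to a GPT-style decoder (e.g., via the inputs_embeds argument of a Hugging Face causal LM) to generate the final image description.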
Journal Introduction
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is on publishing the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore continuously updated with new articles, and publication times are short.