{"title":"DGPrompt:双引导提示生成视觉语言模型","authors":"Tai Zheng, Zhen-Duo Chen, Zi-Chao Zhang, Zhen-Xiang Ma, Li-Jun Zhao, Chong-Yu Zhang, Xin Luo, Xin-Shun Xu","doi":"10.1016/j.neunet.2025.107472","DOIUrl":null,"url":null,"abstract":"<div><div>Introducing learnable prompts into CLIP and fine-tuning them have demonstrated excellent performance across many downstream tasks. However, existing methods have insufficient interaction between modalities and neglect the importance of hierarchical contextual information, leading to ineffective alignment in both the visual and textual representation spaces. Additionally, CLIP is highly sensitive to prompts, making learnable prompts prone to overfitting on seen classes, which results in the forgetting of general knowledge of CLIP and severely impair generalization ability on unseen classes. To address these issues, we propose an original <span><math><mi>D</mi></math></span>ual-<span><math><mi>G</mi></math></span>uidance <span><math><mi>Prompt</mi></math></span>s Generation (<span><math><mi>DGPrompt</mi></math></span>) method that promotes alignment between visual and textual spaces while ensuring the continuous retention of general knowledge. The main ideas of DGPrompt are as follows: 1) The extraction of image and text embeddings are guided mutually by generating visual and textual prompts, making full use of complementary information from both modalities to align visual and textual spaces. 2) The prompt-tuning process is restrained by a retention module, reducing the forgetting of general knowledge. Extensive experiments conducted in settings of base-to-new class generalization and few-shot learning demonstrate the superiority of the proposed method. 
Compared with the baseline method CLIP and the state-of-the-art method MaPLe, DGPrompt exhibits favorable performance and achieves an absolute gain of 7.84% and 0.99% on overall harmonic mean, averaged over 11 diverse image recognition datasets.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107472"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DGPrompt: Dual-guidance prompts generation for vision-language models\",\"authors\":\"Tai Zheng, Zhen-Duo Chen, Zi-Chao Zhang, Zhen-Xiang Ma, Li-Jun Zhao, Chong-Yu Zhang, Xin Luo, Xin-Shun Xu\",\"doi\":\"10.1016/j.neunet.2025.107472\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Introducing learnable prompts into CLIP and fine-tuning them have demonstrated excellent performance across many downstream tasks. However, existing methods have insufficient interaction between modalities and neglect the importance of hierarchical contextual information, leading to ineffective alignment in both the visual and textual representation spaces. Additionally, CLIP is highly sensitive to prompts, making learnable prompts prone to overfitting on seen classes, which results in the forgetting of general knowledge of CLIP and severely impair generalization ability on unseen classes. To address these issues, we propose an original <span><math><mi>D</mi></math></span>ual-<span><math><mi>G</mi></math></span>uidance <span><math><mi>Prompt</mi></math></span>s Generation (<span><math><mi>DGPrompt</mi></math></span>) method that promotes alignment between visual and textual spaces while ensuring the continuous retention of general knowledge. 
The main ideas of DGPrompt are as follows: 1) The extraction of image and text embeddings are guided mutually by generating visual and textual prompts, making full use of complementary information from both modalities to align visual and textual spaces. 2) The prompt-tuning process is restrained by a retention module, reducing the forgetting of general knowledge. Extensive experiments conducted in settings of base-to-new class generalization and few-shot learning demonstrate the superiority of the proposed method. Compared with the baseline method CLIP and the state-of-the-art method MaPLe, DGPrompt exhibits favorable performance and achieves an absolute gain of 7.84% and 0.99% on overall harmonic mean, averaged over 11 diverse image recognition datasets.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107472\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-04-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S089360802500351X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S089360802500351X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
DGPrompt: Dual-guidance prompts generation for vision-language models
Introducing learnable prompts into CLIP and fine-tuning them has demonstrated excellent performance across many downstream tasks. However, existing methods have insufficient interaction between modalities and neglect the importance of hierarchical contextual information, leading to ineffective alignment in both the visual and textual representation spaces. Additionally, CLIP is highly sensitive to prompts, making learnable prompts prone to overfitting on seen classes, which results in the forgetting of CLIP's general knowledge and severely impairs generalization ability on unseen classes. To address these issues, we propose an original Dual-Guidance Prompts Generation (DGPrompt) method that promotes alignment between visual and textual spaces while ensuring the continuous retention of general knowledge. The main ideas of DGPrompt are as follows: 1) The extraction of image and text embeddings is guided mutually by generating visual and textual prompts, making full use of complementary information from both modalities to align the visual and textual spaces. 2) The prompt-tuning process is restrained by a retention module, reducing the forgetting of general knowledge. Extensive experiments conducted in the settings of base-to-new class generalization and few-shot learning demonstrate the superiority of the proposed method. Compared with the baseline method CLIP and the state-of-the-art method MaPLe, DGPrompt exhibits favorable performance and achieves absolute gains of 7.84% and 0.99%, respectively, on the overall harmonic mean, averaged over 11 diverse image recognition datasets.
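The two ideas in the abstract can be sketched numerically. The following is a minimal, hypothetical illustration only, not the authors' implementation: random vectors stand in for frozen CLIP image/text features, small random matrices (`W_t2v`, `W_v2t`) stand in for learned cross-modal prompt generators, and a simple mean-squared penalty stands in for the retention module that keeps tuned embeddings near the frozen CLIP embeddings. All names and dimensions here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (illustrative, not CLIP's actual size)

# Stand-ins for frozen CLIP features of an image and a class-name text.
img_feat = rng.standard_normal(d)
txt_feat = rng.standard_normal(d)

# Dual guidance: each modality generates a prompt for the other via a
# learnable linear map (random here, standing in for trained weights).
W_t2v = rng.standard_normal((d, d)) * 0.1  # text feature -> visual prompt
W_v2t = rng.standard_normal((d, d)) * 0.1  # image feature -> textual prompt

visual_prompt = W_t2v @ txt_feat
textual_prompt = W_v2t @ img_feat

# Prompted embeddings: frozen feature plus the cross-modal prompt.
img_tuned = img_feat + visual_prompt
txt_tuned = txt_feat + textual_prompt

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retention-style regularizer: penalize drift of the tuned embeddings away
# from the frozen CLIP embeddings, discouraging forgetting of general
# knowledge during prompt tuning.
retention_loss = (np.mean((img_tuned - img_feat) ** 2)
                  + np.mean((txt_tuned - txt_feat) ** 2))

print("image-text cosine:", round(cos(img_tuned, txt_tuned), 4))
print("retention loss:", round(float(retention_loss), 4))
```

In training, the retention term would be minimized jointly with the usual contrastive alignment objective, so the prompts improve cross-modal alignment without drifting far from CLIP's pretrained representation space.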
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.