A transformer-based dual contrastive learning approach for zero-shot learning
Yu Lei, Ran Jing, Fangfang Li, Quanxue Gao, Cheng Deng
Neurocomputing, Volume 626, Article 129530, February 2025. DOI: 10.1016/j.neucom.2025.129530
Abstract
The goal of zero-shot learning is to leverage attribute information from seen classes so that the learned knowledge generalizes to unseen classes. However, current algorithms often overlook the fact that the same attribute may exhibit different visual features across domains, leading to domain shift when knowledge is transferred. Furthermore, for visual feature extraction, networks such as ResNet are not effective at capturing global information from images, which adversely impacts recognition accuracy. To address these challenges, we propose an end-to-end Transformer-Based Dual Contrastive Learning Approach (TFDNet) for zero-shot learning. The network uses the Vision Transformer (ViT) to extract visual features and includes an attribute-localization mechanism that identifies the image regions most relevant to the given attributes. It then employs a dual contrastive learning method as a constraint, optimizing the learning process to better capture global feature representations. The proposed method makes the classifier more robust and enhances its ability to discriminate among, and generalize to, unseen classes. Experimental results on three public datasets demonstrate the superiority of TFDNet over current state-of-the-art algorithms, validating its effectiveness for zero-shot learning.
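The abstract only outlines the architecture, so the sketch below illustrates the two named ingredients under stated assumptions: attribute-guided attention over ViT patch tokens as the localization step, and a symmetric (visual-to-semantic plus semantic-to-visual) InfoNCE pair as one plausible reading of the "dual" contrastive constraint. The function names (attribute_localization, info_nce), tensor shapes, and loss formulation are illustrative assumptions, not the paper's implementation; random tensors stand in for a real ViT backbone and for learned semantic vectors.

```python
# Hedged sketch of the two ideas named in the abstract; not the paper's code.
import torch
import torch.nn.functional as F

def attribute_localization(patch_feats, attr_embeds):
    """Soft-attend each attribute over image patches (assumed mechanism).

    patch_feats: (B, N, D) ViT patch tokens
    attr_embeds: (K, D) attribute prototype vectors
    returns:     (B, K, D) per-attribute visual features
    """
    # similarity of every attribute to every patch, then softmax over patches
    sim = torch.einsum('bnd,kd->bkn', patch_feats, attr_embeds)
    attn = sim.softmax(dim=-1)                       # localizes attribute regions
    return torch.einsum('bkn,bnd->bkd', attn, patch_feats)

def info_nce(anchor, positive, temperature=0.1):
    """Standard InfoNCE between matched rows of anchor and positive."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

# Toy forward pass: random tensors stand in for real model outputs.
B, N, D, K = 4, 196, 768, 85                         # batch, patches, dim, attributes
patch_feats = torch.randn(B, N, D)                   # would come from a ViT backbone
attr_embeds = torch.randn(K, D)                      # projected attribute semantics
class_semantics = torch.randn(B, D)                  # class-level semantic vectors

attr_feats = attribute_localization(patch_feats, attr_embeds)
image_feats = attr_feats.mean(dim=1)                 # pooled global representation

# "Dual" contrastive objective read as two symmetric InfoNCE terms (assumption).
loss = info_nce(image_feats, class_semantics) + info_nce(class_semantics, image_feats)
print(loss.item())
```

In this reading, the two loss terms constrain the visual and semantic embeddings to agree in both directions, which is one way a dual contrastive constraint could tighten the visual-semantic alignment the abstract describes.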
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal's essential topics are neurocomputing theory, practice, and applications.