{"title":"DoA-ViT: Dual-objective Affine Vision Transformer for Data Insufficiency","authors":"Qiang Ren, Junli Wang","doi":"10.1016/j.neucom.2024.128896","DOIUrl":null,"url":null,"abstract":"<div><div>Vision Transformers (ViTs) excel in large-scale image recognition tasks but struggle with limited data due to ineffective patch-level local information utilization. Existing methods focus on enhancing local representations at the model level but often treat all features equally, leading to noise from irrelevant information. Effectively distinguishing between discriminative features and irrelevant information helps minimize the interference of noise at the model level. To tackle this, we introduce Dual-objective Affine Vision Transformer (DoA-ViT), which enhances ViTs for data-limited tasks by improving feature discrimination. DoA-ViT incorporates a learnable affine transformation that associates transformed features with class-specific ones while preserving their intrinsic features. Additionally, an adaptive patch-based enhancement mechanism is designed to assign importance scores to patches, minimizing the impact of irrelevant information. These enhancements can be seamlessly integrated into existing ViTs as plug-and-play components. Extensive experiments on small-scale datasets show that DoA-ViT consistently outperforms existing methods, with visualization results highlighting its ability to identify critical image regions effectively.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"615 ","pages":"Article 128896"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224016679","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Vision Transformers (ViTs) excel in large-scale image recognition tasks but struggle with limited data due to ineffective patch-level local information utilization. Existing methods focus on enhancing local representations at the model level but often treat all features equally, leading to noise from irrelevant information. Effectively distinguishing between discriminative features and irrelevant information helps minimize the interference of noise at the model level. To tackle this, we introduce Dual-objective Affine Vision Transformer (DoA-ViT), which enhances ViTs for data-limited tasks by improving feature discrimination. DoA-ViT incorporates a learnable affine transformation that associates transformed features with class-specific ones while preserving their intrinsic features. Additionally, an adaptive patch-based enhancement mechanism is designed to assign importance scores to patches, minimizing the impact of irrelevant information. These enhancements can be seamlessly integrated into existing ViTs as plug-and-play components. Extensive experiments on small-scale datasets show that DoA-ViT consistently outperforms existing methods, with visualization results highlighting its ability to identify critical image regions effectively.
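To make the two components in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of how a learnable affine transformation and an adaptive patch-importance module could be attached to ViT patch tokens as plug-and-play layers. The module names, initialization, and scoring head are assumptions for illustration only and are not taken from the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' released code): one plausible reading of the
# two components described in the abstract, as plug-and-play modules for a ViT.
import torch
import torch.nn as nn


class LearnableAffine(nn.Module):
    """Per-channel learnable affine transform y = scale * x + shift.

    Initialized to the identity so that inserting it into a ViT block
    initially preserves the intrinsic patch features.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim)
        return x * self.scale + self.shift


class PatchImportance(nn.Module):
    """Assigns a score in (0, 1) to each patch token and reweights it,
    down-weighting patches that carry irrelevant information."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # scores: (batch, num_patches, 1), broadcast over the channel dimension
        scores = torch.sigmoid(self.scorer(x))
        return x * scores


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)        # e.g. ViT-B/16 patch tokens
    tokens = LearnableAffine(768)(tokens)    # class-oriented affine transform
    tokens = PatchImportance(768)(tokens)    # adaptive patch reweighting
    print(tokens.shape)                      # torch.Size([2, 196, 768])
```

Because both modules keep the token shape unchanged, they can be dropped between existing ViT blocks without modifying the backbone, which matches the plug-and-play claim in the abstract.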
Journal Introduction
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.