{"title":"Adaptive integration of textual context and visual embeddings for underrepresented vision classification","authors":"Seongyeop Kim , Hyung-Il Kim , Yong Man Ro","doi":"10.1016/j.patcog.2025.112420","DOIUrl":null,"url":null,"abstract":"<div><div>The advancement of deep learning has significantly improved image classification performance; however, handling long-tail distributions remains challenging due to the limited data available for rare classes. Existing approaches predominantly focus on visual features, often neglecting the valuable contextual information provided by textual data, which can be especially beneficial for classes with sparse visual examples. In this work, we introduce a novel method addressing this limitation by integrating textual data generated by advanced language models with visual inputs through our newly proposed Adaptive Integration Block for Vision-Text Synergy (AIB-VTS). Specifically designed for Vision Transformer architectures, AIB-VTS adaptively balances visual and textual information during inference, effectively utilizing textual descriptions generated from large language models. Extensive experiments on benchmark datasets demonstrate substantial performance improvements across all class groups, particularly in underrepresented (tail) classes. These results confirm the effectiveness of our approach in leveraging textual context to mitigate data scarcity issues and enhance model robustness.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"172 ","pages":"Article 112420"},"PeriodicalIF":7.6000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325010817","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The advancement of deep learning has significantly improved image classification performance; however, handling long-tail distributions remains challenging due to the limited data available for rare classes. Existing approaches predominantly focus on visual features, often neglecting the valuable contextual information provided by textual data, which can be especially beneficial for classes with sparse visual examples. In this work, we introduce a novel method addressing this limitation by integrating textual data generated by advanced language models with visual inputs through our newly proposed Adaptive Integration Block for Vision-Text Synergy (AIB-VTS). Specifically designed for Vision Transformer architectures, AIB-VTS adaptively balances visual and textual information during inference, effectively utilizing textual descriptions generated by large language models. Extensive experiments on benchmark datasets demonstrate substantial performance improvements across all class groups, particularly in underrepresented (tail) classes. These results confirm the effectiveness of our approach in leveraging textual context to mitigate data scarcity issues and enhance model robustness.
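The abstract does not specify the internal design of AIB-VTS. As a rough illustration of what "adaptively balancing visual and textual information" could look like in a Vision Transformer pipeline, the following is a minimal PyTorch sketch of a gated fusion block; the class name, the sigmoid gate, the projection layer, and the 768-dimensional features are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of adaptive vision-text fusion in the spirit of AIB-VTS.
# Not the paper's architecture: the gating design and all names are assumed.
import torch
import torch.nn as nn


class AdaptiveIntegrationBlock(nn.Module):
    """Fuse a ViT [CLS] embedding with a class-description text embedding
    via a learned, input-dependent gate."""

    def __init__(self, dim: int):
        super().__init__()
        # Project the text embedding into the visual embedding space.
        self.text_proj = nn.Linear(dim, dim)
        # Gate network: looks at both modalities and outputs per-feature
        # weights in (0, 1) deciding how much textual context to mix in.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.Sigmoid(),
        )

    def forward(self, visual: torch.Tensor, textual: torch.Tensor) -> torch.Tensor:
        # visual:  (batch, dim) ViT [CLS] token embedding
        # textual: (batch, dim) embedding of an LLM-generated class description
        t = self.text_proj(textual)
        g = self.gate(torch.cat([visual, t], dim=-1))
        # Convex per-feature mix: g leans on text, (1 - g) on vision, so
        # sparsely illustrated (tail) classes can draw more on textual context.
        return g * t + (1.0 - g) * visual


# Usage sketch: fuse embeddings before a linear classifier head.
block = AdaptiveIntegrationBlock(dim=768)
vis = torch.randn(4, 768)  # e.g. ViT-B/16 [CLS] features
txt = torch.randn(4, 768)  # e.g. sentence embeddings of LLM descriptions
fused = block(vis, txt)    # (4, 768)
```

Because the gate is computed per input, such a block can learn to weight textual context more heavily for images whose visual features are uninformative, which matches the abstract's claim of per-inference adaptive balancing.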
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.