Mining fine-grained attributes for vision–semantics integration in few-shot learning

IF 4.2 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Juan Zhao, Lili Kong, Deshang Sun, Deng Xiong, Jiancheng Lv
{"title":"挖掘细粒度属性,实现少镜头学习中的视觉语义集成","authors":"Juan Zhao ,&nbsp;Lili Kong ,&nbsp;Deshang Sun ,&nbsp;Deng Xiong ,&nbsp;Jiancheng Lv","doi":"10.1016/j.imavis.2025.105739","DOIUrl":null,"url":null,"abstract":"<div><div>Recent advancements in Few-Shot Learning (FSL) have been significantly driven by leveraging semantic descriptions to enhance feature discrimination and recognition performance. However, existing methods, such as SemFew, often rely on verbose or manually curated attributes and apply semantic guidance only to the support set, limiting their effectiveness in distinguishing fine-grained categories. Inspired by human visual perception, which emphasizes crucial features for accurate recognition, this study introduces concise, fine-grained semantic attributes to address these limitations. We propose a Visual Attribute Enhancement (VAE) mechanism that integrates enriched semantic information into visual features, enabling the model to highlight the most relevant visual attributes and better distinguish visually similar samples. This module enhances visual features by aligning them with semantic attribute embeddings through a cross-attention mechanism and optimizes this alignment using an attribute-based cross-entropy loss. Furthermore, to mitigate the performance degradation caused by methods that supply semantic information exclusively to the support set, we propose a semantic attribute reconstruction (SAR) module. This module predicts and integrates semantic features for query samples, ensuring balanced information distribution between the support and query sets. Specifically, SAR enhances query representations by aligning and reconstructing semantic and visual attributes through regression and optimal transport losses to ensure semantic–visual consistency. Experiments on five benchmark datasets, including both general datasets and more challenging fine-grained Few-Shot datasets consistently demonstrate that our proposed method outperforms state-of-the-art methods in both 5-way 1-shot and 5-way 5-shot settings.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"163 ","pages":"Article 105739"},"PeriodicalIF":4.2000,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mining fine-grained attributes for vision–semantics integration in few-shot learning\",\"authors\":\"Juan Zhao ,&nbsp;Lili Kong ,&nbsp;Deshang Sun ,&nbsp;Deng Xiong ,&nbsp;Jiancheng Lv\",\"doi\":\"10.1016/j.imavis.2025.105739\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent advancements in Few-Shot Learning (FSL) have been significantly driven by leveraging semantic descriptions to enhance feature discrimination and recognition performance. However, existing methods, such as SemFew, often rely on verbose or manually curated attributes and apply semantic guidance only to the support set, limiting their effectiveness in distinguishing fine-grained categories. Inspired by human visual perception, which emphasizes crucial features for accurate recognition, this study introduces concise, fine-grained semantic attributes to address these limitations. We propose a Visual Attribute Enhancement (VAE) mechanism that integrates enriched semantic information into visual features, enabling the model to highlight the most relevant visual attributes and better distinguish visually similar samples. 
This module enhances visual features by aligning them with semantic attribute embeddings through a cross-attention mechanism and optimizes this alignment using an attribute-based cross-entropy loss. Furthermore, to mitigate the performance degradation caused by methods that supply semantic information exclusively to the support set, we propose a semantic attribute reconstruction (SAR) module. This module predicts and integrates semantic features for query samples, ensuring balanced information distribution between the support and query sets. Specifically, SAR enhances query representations by aligning and reconstructing semantic and visual attributes through regression and optimal transport losses to ensure semantic–visual consistency. Experiments on five benchmark datasets, including both general datasets and more challenging fine-grained Few-Shot datasets consistently demonstrate that our proposed method outperforms state-of-the-art methods in both 5-way 1-shot and 5-way 5-shot settings.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"163 \",\"pages\":\"Article 105739\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625003270\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625003270","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recent advancements in Few-Shot Learning (FSL) have been significantly driven by leveraging semantic descriptions to enhance feature discrimination and recognition performance. However, existing methods, such as SemFew, often rely on verbose or manually curated attributes and apply semantic guidance only to the support set, limiting their effectiveness in distinguishing fine-grained categories. Inspired by human visual perception, which emphasizes crucial features for accurate recognition, this study introduces concise, fine-grained semantic attributes to address these limitations. We propose a Visual Attribute Enhancement (VAE) mechanism that integrates enriched semantic information into visual features, enabling the model to highlight the most relevant visual attributes and better distinguish visually similar samples. This module enhances visual features by aligning them with semantic attribute embeddings through a cross-attention mechanism and optimizes this alignment using an attribute-based cross-entropy loss. Furthermore, to mitigate the performance degradation caused by methods that supply semantic information exclusively to the support set, we propose a Semantic Attribute Reconstruction (SAR) module. This module predicts and integrates semantic features for query samples, ensuring balanced information distribution between the support and query sets. Specifically, SAR enhances query representations by aligning and reconstructing semantic and visual attributes through regression and optimal transport losses to ensure semantic–visual consistency. Experiments on five benchmark datasets, including both general datasets and more challenging fine-grained few-shot datasets, consistently demonstrate that our proposed method outperforms state-of-the-art methods in both 5-way 1-shot and 5-way 5-shot settings.
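To make the two mechanisms described above concrete, the following is a minimal sketch, not the authors' released code, of how a cross-attention visual attribute enhancement block and the SAR losses could be wired up in PyTorch. All names and dimensions here (VisualAttributeEnhancement, attribute_ce_loss, sar_losses, d_model, n_attr) are illustrative assumptions, and the optimal-transport term is simplified to a cheap distributional surrogate rather than a true Sinkhorn solver.

```python
# Hypothetical sketch of the abstract's VAE/SAR ideas; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAttributeEnhancement(nn.Module):
    """Cross-attention from visual features (queries) to semantic
    attribute embeddings (keys/values), with residual fusion."""
    def __init__(self, d_visual: int, d_attr: int, d_model: int = 256):
        super().__init__()
        self.q = nn.Linear(d_visual, d_model)
        self.k = nn.Linear(d_attr, d_model)
        self.v = nn.Linear(d_attr, d_model)
        self.out = nn.Linear(d_model, d_visual)

    def forward(self, visual: torch.Tensor, attrs: torch.Tensor):
        # visual: (B, d_visual); attrs: (n_attr, d_attr), shared across the batch
        q = self.q(visual).unsqueeze(1)               # (B, 1, d_model)
        k = self.k(attrs).unsqueeze(0)                # (1, n_attr, d_model)
        v = self.v(attrs).unsqueeze(0)                # (1, n_attr, d_model)
        scores = q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)          # (B, 1, n_attr)
        enhanced = self.out((attn @ v).squeeze(1))    # (B, d_visual)
        return visual + enhanced, attn.squeeze(1)     # residual fusion + weights

def attribute_ce_loss(attn_weights: torch.Tensor, attr_labels: torch.Tensor):
    """Attribute-based cross-entropy: push attention mass toward the
    attributes annotated for each sample (multi-hot attr_labels)."""
    target = attr_labels / attr_labels.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return -(target * torch.log(attn_weights.clamp_min(1e-8))).sum(-1).mean()

def sar_losses(pred_sem: torch.Tensor, true_sem: torch.Tensor):
    """SAR for query samples: a regression term plus a stand-in for the
    optimal-transport term enforcing semantic-visual consistency."""
    reg = F.mse_loss(pred_sem, true_sem)
    p = F.softmax(pred_sem, dim=-1)
    q = F.softmax(true_sem, dim=-1)
    ot = (p - q).abs().sum(-1).mean()  # surrogate; a real OT solver would go here
    return reg + ot
```

In a prototypical-network-style pipeline, the enhanced support features would form class prototypes while the SAR-reconstructed semantics augment query embeddings before nearest-prototype classification; the abstract does not specify these integration details, so this is one plausible reading rather than the paper's method.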
Source journal
Image and Vision Computing (Engineering Technology | Engineering: Electronic & Electrical)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Aims and scope: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.