Multi-Label Zero-Shot Learning Via Contrastive Label-Based Attention.

International Journal of Neural Systems · Pub Date: 2025-03-01 · Epub Date: 2025-01-23 · DOI: 10.1142/S0129065725500108
Shixuan Meng, Rongxin Jiang, Xiang Tian, Fan Zhou, Yaowu Chen, Junjie Liu, Chen Shen

Abstract

Multi-label zero-shot learning (ML-ZSL) aims to recognize all objects in an image, regardless of whether their classes appear in the training data. Recent methods incorporate an attention mechanism to localize labels in the image and generate class-specific semantic information. However, an attention mechanism built only on visual features treats all label embeddings equally in the prediction score, leading to severe semantic ambiguity. This study focuses on using semantic information efficiently within the attention mechanism. We propose a contrastive label-based attention method (CLA) that associates each label with its most relevant image regions. Specifically, our label-based attention, guided by the latent label embedding, captures discriminative image details. To distinguish region-wise correlations, we introduce a region-level contrastive loss. In addition, we use a global feature alignment module to identify labels carrying general information. Extensive experiments on two benchmarks, NUS-WIDE and Open Images, demonstrate that CLA outperforms state-of-the-art methods. In particular, under the ZSL setting, our method achieves a 2.0% improvement in mean Average Precision (mAP) on NUS-WIDE and a 4.0% improvement on Open Images compared with recent methods.
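The abstract does not include code, and the authors' actual architecture is not specified here. As a rough illustrative sketch of the two general ideas it names, this hypothetical NumPy snippet shows (a) query-style attention in which a label embedding scores image regions, and (b) an InfoNCE-style contrastive term that pulls an anchor region feature toward a positive and away from negatives. All function names, shapes, and the temperature value are assumptions for illustration, not the paper's method.

```python
import numpy as np

def label_based_attention(label_emb, region_feats):
    """Attend over image regions using a label embedding as the query.

    label_emb:    (d,)   embedding of one label (the query)
    region_feats: (R, d) features of R image regions (keys and values)
    Returns (weights, attended): softmax weights over regions (R,)
    and their weighted combination of region features (d,).
    """
    d = label_emb.shape[0]
    scores = region_feats @ label_emb / np.sqrt(d)  # scaled dot-product
    weights = np.exp(scores - scores.max())          # stable softmax
    weights /= weights.sum()
    attended = weights @ region_feats                # convex combination
    return weights, attended

def region_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over region features: high when the anchor is
    closer (in cosine similarity) to negatives than to its positive."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / temperature
    sims -= sims.max()                               # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])                         # positive is index 0

# Toy usage with random features.
rng = np.random.default_rng(0)
label = rng.standard_normal(8)          # one label embedding
regions = rng.standard_normal((5, 8))   # 5 image-region features
w, att = label_based_attention(label, regions)
loss = region_contrastive_loss(regions[0], regions[1],
                               [regions[2], regions[3]])
```

Here `w` sums to 1 and `att` lies in the span of the region features; the contrastive term is minimized when the anchor aligns with its positive region, which is the intuition behind distinguishing region-wise correlations.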
