Union Embedding and Backbone-Attention boost Zero-Shot Learning Model (UBZSL)

Ziyu Li
2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)
DOI: 10.1109/IPAS55744.2022.10052972
Published: 2022-12-05
Citations: 0

Abstract

Zero-Shot Learning (ZSL) aims to identify categories that are never seen during training. Many ZSL methods are available, and the number is steadily increasing, yet some issues remain unresolved, such as class embedding and image feature extraction. Recent work on class embedding has relied on human-annotated attributes. However, such attributes do not adequately represent the semantic and visual aspects of each class, and annotating them is time-consuming. Furthermore, ZSL methods for extracting image features rely on pre-trained or fine-tuned image representations, focusing on learning appropriate mappings between image representations and attributes. To reduce the dependency on manual annotation and improve classification effectiveness, we believe ZSL would benefit from Contrastive Language-Image Pre-Training (CLIP), either alone or combined with manual annotation. For this purpose, we propose an improved ZSL model named UBZSL. It uses CLIP combined with manual annotation as the class embedding and an attention map for feature extraction. Experiments show that the performance of our ZSL model on the CUB dataset is greatly improved compared to current models.
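The union class-embedding idea — fusing CLIP text features with human-annotated attribute vectors and matching an image against the fused per-class embeddings — can be sketched as below. This is a toy illustration, not the paper's implementation: the embeddings are made-up 2-dimensional vectors standing in for precomputed CLIP features and attribute annotations, and simple normalize-then-concatenate fusion with cosine-similarity matching is an assumption about how such a combination might work.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length so neither source dominates the fusion."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def union_embedding(clip_text_emb, attribute_vec):
    """Fuse a (hypothetical) CLIP text embedding with an attribute vector
    by normalizing each and concatenating them."""
    return l2_normalize(clip_text_emb) + l2_normalize(attribute_vec)

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v)) / (
        math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def classify(image_emb, class_embeddings):
    """Assign the class whose union embedding is most similar to the image
    embedding -- the basic zero-shot matching step."""
    return max(class_embeddings, key=lambda c: cosine(image_emb, class_embeddings[c]))

# Hypothetical toy data: 2-dim "CLIP" text features + 2-dim attribute vectors.
classes = {
    "sparrow": union_embedding([0.9, 0.1], [1.0, 0.0]),
    "albatross": union_embedding([0.1, 0.9], [0.0, 1.0]),
}
print(classify([0.8, 0.2, 0.9, 0.1], classes))  # prints "sparrow"
```

In a real pipeline the image embedding would come from the backbone (with the paper's attention map applied), and the CLIP text features from encoding each class name, but the matching step reduces to this nearest-embedding lookup.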