Connecting the Semantic Dots: Zero-shot Learning with Self-Aligning Autoencoders and a New Contrastive-Loss for Negative Sampling

Mohammed Terry-Jack, N. Rozanov
{"title":"Connecting the Semantic Dots: Zero-shot Learning with Self-Aligning Autoencoders and a New Contrastive-Loss for Negative Sampling","authors":"Mohammed Terry-Jack, N. Rozanov","doi":"10.1109/ICMLA55696.2022.00236","DOIUrl":null,"url":null,"abstract":"We introduce a novel zero-shot learning (ZSL) method, known as ‘self-alignment training’, and use it to train a vanilla autoencoder which is then evaluated on four prominent ZSL Tasks CUB, SUN, AWA1&2. Despite being a far simpler model than the competition, our method achieved results on par with SOTA. In addition, we also present a novel ‘contrastive-loss’ objective to allow autoencoders to learn from negative samples. In particular, we achieve new SOTA of 64.5 on AWA2 for Generalised ZSL and a new SOTA for standard ZSL of 47.7 on SUN. The code is publicly accessible on https://github.com/Wluper/satae.","PeriodicalId":128160,"journal":{"name":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"168 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA55696.2022.00236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We introduce a novel zero-shot learning (ZSL) method, known as 'self-alignment training', and use it to train a vanilla autoencoder, which we then evaluate on four prominent ZSL benchmarks: CUB, SUN, AWA1, and AWA2. Despite being a far simpler model than the competition, our method achieves results on par with the state of the art (SOTA). In addition, we present a novel 'contrastive-loss' objective that allows autoencoders to learn from negative samples. In particular, we achieve a new SOTA of 64.5 on AWA2 for generalised ZSL and a new SOTA of 47.7 on SUN for standard ZSL. The code is publicly available at https://github.com/Wluper/satae.
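To make the two components named in the abstract concrete, below is a minimal sketch, assuming a PyTorch setup, of a vanilla autoencoder whose latent space is the class-attribute ("semantic") space, paired with a margin-based contrastive term over sampled negative-class attributes. This is one plausible reading of the paper's 'contrastive-loss', not the authors' exact formulation; all layer sizes, the margin value, and the hinge form of the loss are illustrative assumptions. The authors' actual implementation is at https://github.com/Wluper/satae.

```python
# Hedged sketch only: an autoencoder mapping visual features <-> attribute
# space, plus a hypothetical margin-based contrastive loss over negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaAutoencoder(nn.Module):
    def __init__(self, feat_dim: int = 2048, attr_dim: int = 85):
        super().__init__()
        # Encoder: visual feature -> semantic (attribute) space.
        self.encoder = nn.Linear(feat_dim, attr_dim)
        # Decoder: semantic space -> reconstructed visual feature.
        self.decoder = nn.Linear(attr_dim, feat_dim)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return z, self.decoder(z)

def contrastive_loss(z, pos_attr, neg_attrs, margin: float = 0.5):
    """Pull the latent code toward its class attributes and push it at
    least `margin` away from negative-class attributes (hinge form).
    An assumed formulation, not taken from the paper."""
    pos = F.mse_loss(z, pos_attr)
    # neg_attrs: (batch, n_neg, attr_dim); squared distance per negative.
    neg = ((z.unsqueeze(1) - neg_attrs) ** 2).mean(dim=-1)  # (batch, n_neg)
    return pos + F.relu(margin - neg).mean()

# One hypothetical training step: reconstruction + contrastive terms.
model = VanillaAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 2048)        # visual features (e.g. from a CNN backbone)
pos_attr = torch.randn(32, 85)   # ground-truth class-attribute vectors
neg_attrs = torch.randn(32, 5, 85)  # attributes of 5 sampled negative classes
opt.zero_grad()
z, x_hat = model(x)
loss = F.mse_loss(x_hat, x) + contrastive_loss(z, pos_attr, neg_attrs)
loss.backward()
opt.step()
```

Under this reading, zero-shot prediction would classify an unseen image by encoding its features and taking the nearest attribute vector among the unseen classes; the contrastive term shapes the latent space so that such nearest-neighbour lookups separate classes the autoencoder never reconstructed.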