Semantic Auto-Encoder with L2-norm Constraint for Zero-Shot Learning

Yuhao Wu, Weipeng Cao, Ye Liu, Zhong Ming, Jian-qiang Li, Bo Lu
{"title":"基于l2范数约束的零学习语义自编码器","authors":"Yuhao Wu, Weipeng Cao, Ye Liu, Zhong Ming, Jian-qiang Li, Bo Lu","doi":"10.1145/3457682.3457699","DOIUrl":null,"url":null,"abstract":"Zero-Shot Learning (ZSL) is an effective paradigm to solve label prediction when some classes have no training samples. In recent years, many ZSL algorithms have been proposed. Among them, semantic autoencoder (SAE) is widely used because of its simplicity and good generalization ability. However, our research found that most of the existing SAE based methods use implicit constraints to guarantee the mapping quality between feature space and semantic space. In fact, the implicit constraints are insufficient in minimizing the structural risk of the model and easy to cause the over-fitting problem. To solve this problem, we propose a novel SAE algorithm with the L2-norm constraint (SAE-L2) in this study. SAE-L2 adds the L2 regularization constraint to the mapping parameters in its optimization objective, which explicitly guarantees the structural risk minimization of the model. Extensive experiments on four benchmark datasets show that our proposed SAE-L2 can achieve better performance than the original SAE model and other ZSL algorithms.","PeriodicalId":142045,"journal":{"name":"2021 13th International Conference on Machine Learning and Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Semantic Auto-Encoder with L2-norm Constraint for Zero-Shot Learning\",\"authors\":\"Yuhao Wu, Weipeng Cao, Ye Liu, Zhong Ming, Jian-qiang Li, Bo Lu\",\"doi\":\"10.1145/3457682.3457699\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Zero-Shot Learning (ZSL) is an effective paradigm to solve label prediction when some classes have no training samples. In recent years, many ZSL algorithms have been proposed. Among them, semantic autoencoder (SAE) is widely used because of its simplicity and good generalization ability. However, our research found that most of the existing SAE based methods use implicit constraints to guarantee the mapping quality between feature space and semantic space. In fact, the implicit constraints are insufficient in minimizing the structural risk of the model and easy to cause the over-fitting problem. To solve this problem, we propose a novel SAE algorithm with the L2-norm constraint (SAE-L2) in this study. SAE-L2 adds the L2 regularization constraint to the mapping parameters in its optimization objective, which explicitly guarantees the structural risk minimization of the model. 
Extensive experiments on four benchmark datasets show that our proposed SAE-L2 can achieve better performance than the original SAE model and other ZSL algorithms.\",\"PeriodicalId\":142045,\"journal\":{\"name\":\"2021 13th International Conference on Machine Learning and Computing\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-02-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 13th International Conference on Machine Learning and Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3457682.3457699\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 13th International Conference on Machine Learning and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3457682.3457699","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Zero-Shot Learning (ZSL) is an effective paradigm for predicting labels of classes that have no training samples. In recent years, many ZSL algorithms have been proposed. Among them, the semantic auto-encoder (SAE) is widely used because of its simplicity and good generalization ability. However, our research found that most existing SAE-based methods rely on implicit constraints to guarantee the quality of the mapping between the feature space and the semantic space. In practice, these implicit constraints are insufficient for minimizing the model's structural risk and can easily lead to over-fitting. To address this problem, we propose a novel SAE algorithm with an L2-norm constraint (SAE-L2). SAE-L2 adds an L2 regularization term on the mapping parameters to its optimization objective, which explicitly enforces structural risk minimization. Extensive experiments on four benchmark datasets show that SAE-L2 achieves better performance than the original SAE model and other ZSL algorithms.
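The abstract does not give the optimization objective in closed form, so the sketch below is only an illustration: it starts from the original SAE formulation, a linear mapping W between visual features X and semantic vectors S with a reconstruction term, and adds a Frobenius (L2) penalty beta * ||W||_F^2 on the mapping parameters. The weights lam and beta, the Sylvester-equation solver, and the helper names fit_sae_l2 and predict_unseen are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a semantic auto-encoder with an explicit L2 (Frobenius)
# penalty on the projection matrix W. Assumed objective (not taken from the
# paper): ||X - W^T S||_F^2 + lam * ||W X - S||_F^2 + beta * ||W||_F^2.
import numpy as np
from scipy.linalg import solve_sylvester


def fit_sae_l2(X, S, lam=0.2, beta=0.1):
    """Solve for W in the regularized SAE objective.

    X : (d, N) visual features, one column per training sample.
    S : (k, N) semantic vectors (attributes / word embeddings) per sample.
    Setting the gradient of the assumed objective to zero gives the
    Sylvester equation (S S^T + beta I) W + W (lam X X^T) = (1 + lam) S X^T.
    """
    k = S.shape[0]
    A = S @ S.T + beta * np.eye(k)      # (k, k)
    B = lam * (X @ X.T)                 # (d, d)
    C = (1.0 + lam) * (S @ X.T)         # (k, d)
    return solve_sylvester(A, B, C)     # (k, d) projection matrix W


def predict_unseen(W, X_test, S_unseen):
    """Project test features into semantic space and match unseen prototypes."""
    S_pred = W @ X_test                                        # (k, N_test)
    S_pred = S_pred / np.linalg.norm(S_pred, axis=0, keepdims=True)
    P = S_unseen / np.linalg.norm(S_unseen, axis=0, keepdims=True)
    return np.argmax(P.T @ S_pred, axis=0)                     # class indices


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((512, 300))   # toy visual features
    S = rng.standard_normal((85, 300))    # toy semantic vectors
    W = fit_sae_l2(X, S)
    print(W.shape)                        # (85, 512)
```

Setting beta = 0 recovers the standard SAE solution; the extra beta * I term also improves the conditioning of the Sylvester system, which is one way an explicit L2 constraint on the mapping can reduce over-fitting.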