Semantic Auto-Encoder with L2-norm Constraint for Zero-Shot Learning
Yuhao Wu, Weipeng Cao, Ye Liu, Zhong Ming, Jian-qiang Li, Bo Lu
2021 13th International Conference on Machine Learning and Computing, published 2021-02-26. DOI: 10.1145/3457682.3457699
Zero-Shot Learning (ZSL) is an effective paradigm for label prediction when some classes have no training samples. In recent years, many ZSL algorithms have been proposed. Among them, the semantic autoencoder (SAE) is widely used because of its simplicity and good generalization ability. However, our research found that most existing SAE-based methods rely on implicit constraints to guarantee the quality of the mapping between the feature space and the semantic space. In practice, these implicit constraints are insufficient to minimize the model's structural risk and can easily lead to over-fitting. To address this problem, we propose a novel SAE algorithm with an L2-norm constraint (SAE-L2). SAE-L2 adds an L2 regularization term on the mapping parameters to its optimization objective, which explicitly enforces structural risk minimization. Extensive experiments on four benchmark datasets show that the proposed SAE-L2 achieves better performance than the original SAE model and other ZSL algorithms.
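The abstract does not give the exact objective, so the following is a minimal sketch assuming SAE-L2 starts from the original SAE formulation, min_W ||X - W^T S||_F^2 + lam ||W X - S||_F^2, and adds a Frobenius-norm penalty beta ||W||_F^2 on the mapping matrix W. The function names and the hyper-parameters `lam` and `beta` are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def fit_sae_l2(X, S, lam=0.2, beta=0.1):
    """Fit a linear mapping W (k x d) from features X (d x N) to semantic
    vectors S (k x N), SAE-style, with an added L2 (Frobenius) penalty.

    Assumed objective:
        min_W ||X - W^T S||_F^2 + lam * ||W X - S||_F^2 + beta * ||W||_F^2
    Setting the gradient to zero yields a Sylvester equation:
        (S S^T + beta * I) W + W (lam * X X^T) = (1 + lam) * S X^T
    """
    k = S.shape[0]
    A = S @ S.T + beta * np.eye(k)        # left coefficient, L2-regularized
    B = lam * (X @ X.T)                   # right coefficient
    C = (1.0 + lam) * (S @ X.T)           # right-hand side
    return solve_sylvester(A, B, C)       # W: k x d

def predict_zero_shot(W, X_test, S_unseen):
    """Project test features into the semantic space and assign each sample
    to the nearest unseen-class prototype by cosine similarity."""
    S_pred = W @ X_test                                            # k x N_test
    S_pred /= np.linalg.norm(S_pred, axis=0, keepdims=True) + 1e-12
    P = S_unseen / (np.linalg.norm(S_unseen, axis=0, keepdims=True) + 1e-12)
    return np.argmax(P.T @ S_pred, axis=0)                         # class per sample
```

Under this assumed objective, the L2 term only shifts the left coefficient of the Sylvester equation by beta * I, so the regularized mapping still has a closed-form solution and is no more expensive to compute than the original SAE.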