Residual Graph Convolutional Networks for Zero-Shot Learning

Jiwei Wei, Yang Yang, Jingjing Li, Lei Zhu, Lin Zuo, Heng Tao Shen
DOI: 10.1145/3338533.3366552
Published in: Proceedings of the ACM Multimedia Asia, 2019-12-15
Citations: 13

Abstract

Most existing Zero-Shot Learning (ZSL) approaches adopt the semantic space as a bridge to classify unseen categories. However, it is difficult to transfer knowledge from seen to unseen categories through the semantic space, since the correlations among categories are uncertain and ambiguous there. In this paper, we formulate zero-shot learning as a classifier weight regression problem. Specifically, we propose a novel Residual Graph Convolution Network (ResGCN) that takes word embeddings and a knowledge graph as inputs and outputs a visual classifier for each category. ResGCN effectively alleviates the problems of over-smoothing and over-fitting. At test time, an unseen image is classified by ranking the inner products of its visual feature with the predicted visual classifiers. Moreover, we provide a new method for building a better knowledge graph. Our approach not only further strengthens the correlations among categories but also makes it easy to add new categories to the knowledge graph. Experiments on the large-scale ImageNet 2011 21K dataset demonstrate that our method significantly outperforms existing state-of-the-art approaches.
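The two core mechanics the abstract describes — a residual graph-convolution layer and test-time classification by ranking inner products — can be sketched as below. This is a minimal NumPy illustration under stated assumptions: the layer form (GCN propagation plus a skip connection), the variable names, and the toy dimensions are mine, not the paper's exact implementation.

```python
import numpy as np

def resgcn_layer(H, A_hat, W):
    """One residual graph-convolution layer (sketch): standard GCN
    propagation A_hat @ H @ W with a skip connection that adds the
    input back -- a common way to counter over-smoothing as layers
    stack. H: node features (n x d), A_hat: normalized adjacency
    (n x n), W: learned weights (d x d)."""
    return H + np.maximum(A_hat @ H @ W, 0.0)  # ReLU activation

def classify_unseen(visual_feature, classifier_weights):
    """Classify an unseen image by ranking the inner products of its
    visual feature with each category's predicted classifier weights,
    returning the index of the highest-scoring category."""
    scores = classifier_weights @ visual_feature
    return int(np.argmax(scores))

# Toy example: 3 categories with 4-dim predicted classifier weights.
W_pred = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
x = np.array([0.1, 0.9, 0.2, 0.0])
print(classify_unseen(x, W_pred))  # -> 1 (second category scores highest)
```

In the paper's setting, `W_pred` would be the per-category classifier weights regressed by ResGCN from word embeddings and the knowledge graph, and `x` a CNN feature of the test image.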