Ontological Concept Structure Aware Knowledge Transfer for Inductive Knowledge Graph Embedding

Chao Ren, Le Zhang, Lintao Fang, Tong Xu, Zhefeng Wang, Senchao Yuan, Enhong Chen
Published in: 2021 International Joint Conference on Neural Networks (IJCNN), 2021-07-18
DOI: 10.1109/IJCNN52387.2021.9533852
Citations: 2

Abstract

Conventional knowledge graph embedding methods generally assume that all entities encountered at the reasoning stage are present in the original training graph. In real-world applications, however, newly emerged entities are inevitable, giving rise to the severe problem of out-of-knowledge-graph entities. Existing efforts on this issue mostly either utilize additional resources, e.g., entity descriptions, or simply aggregate in-knowledge-graph neighbors to embed these new entities inductively. However, high-quality additional resources are usually hard to obtain, and the existing neighbors of new entities may be too sparse to provide enough information for modeling them. Meanwhile, these methods may fail to integrate the rich information of ontological concepts, which provide a general picture of instance entities and usually remain unchanged in the knowledge graph. To this end, we propose a novel inductive framework, namely CatE, to solve the sparsity problem with the enhancement from ontological concepts. Specifically, we first adopt a transformer encoder to model the complex contextual structure of the ontological concepts. Then, we develop a template refinement strategy for generating the target entity embedding, in which the concept embedding forms a basic skeleton of the target entity and the entity's individual characteristics are enriched by its existing neighbors. Finally, extensive experiments on public datasets demonstrate the effectiveness of our proposed model compared with state-of-the-art baselines.
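The template-refinement idea in the abstract — a concept embedding supplying the skeleton of a new entity, refined by whatever in-graph neighbors exist — can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the paper's actual method: `embed_new_entity`, the mean-pooling of neighbors, and the fixed mixing weight `alpha` are all hypothetical simplifications of CatE's learned refinement.

```python
import numpy as np

def embed_new_entity(concept_emb, neighbor_embs, alpha=0.5):
    """Hypothetical template-refinement sketch.

    concept_emb:   embedding of the new entity's ontological concept
                   (the "basic skeleton").
    neighbor_embs: list of embeddings of the entity's existing
                   in-graph neighbors (may be empty when sparse).
    alpha:         fixed mixing weight between skeleton and neighbors
                   (the paper would learn this refinement instead).
    """
    concept_emb = np.asarray(concept_emb, dtype=float)
    if len(neighbor_embs) == 0:
        # Sparse case: no neighbors, fall back to the concept skeleton.
        return concept_emb
    # Enrich the skeleton with the individual signal from neighbors.
    neighbor_avg = np.mean(np.asarray(neighbor_embs, dtype=float), axis=0)
    return alpha * concept_emb + (1.0 - alpha) * neighbor_avg
```

The fallback branch highlights the motivation: even with zero observed neighbors, the concept embedding still yields a usable entity representation, which is exactly where purely neighbor-aggregating inductive methods break down.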