HTransE: Hybrid Translation-based Embedding for Knowledge Graphs

A. Shah, Bonaventure Molokwu, Ziad Kobti
2022 IEEE International Conference on Knowledge Graph (ICKG), November 2022
DOI: 10.1109/ICKG55886.2022.00037

Abstract

A Knowledge Graph (KG) is a graph variant that represents data as triplets, each comprising a head entity, a tail entity, and a relation. In practice, most KGs are compiled either manually or semi-automatically, which usually results in a significant loss of vital information. This incompleteness is common to virtually all KGs and is formally known as the Knowledge Graph Completion (KGC) problem. In this paper, we explore learning representations of a KG's entities and relations for the purpose of predicting missing links. To that end, we propose a hybrid variant, composed of the TransE and SimplE models, for solving KGC problems. On one hand, TransE depicts a relation as a translation from the source entity (head) to the target entity (tail) within an embedding space. In TransE, the head and tail entities are drawn from the same embedding-generation class, which results in low prediction scores; moreover, TransE cannot capture symmetric relationships or one-to-many relationships. On the other hand, SimplE is based on Canonical Polyadic (CP) decomposition. SimplE enhances CP by adding an inverse relation, while the head and tail entities are drawn from different, interdependent embedding-generation classes. Hence, we apply the principle of inverse-relation embedding from the SimplE model to the native TransE model to yield a new hybrid: HTransE. HTransE offers both efficiency and improved prediction scores: it converges much more quickly than TransE, at approximately $n/2$ iterations, where $n$ denotes the number of iterations required to fully train TransE. Our results outperform the native TransE approach by a significant margin, and HTransE also outperforms several state-of-the-art models on different datasets.
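The scoring scheme described above can be sketched in a few lines of NumPy. This is a reconstruction from the abstract, not the authors' reference implementation: the exact HTransE scoring function is an assumption here, combining TransE's translation distance with SimplE-style role-specific entity embeddings and an inverse-relation embedding per relation. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 50, 100, 10

# SimplE-style setup: each entity has two embeddings (one for its head role,
# one for its tail role), and each relation has a forward and an inverse
# embedding. These would normally be learned, not random.
E_head = rng.normal(size=(n_ent, dim))
E_tail = rng.normal(size=(n_ent, dim))
R_fwd = rng.normal(size=(n_rel, dim))
R_inv = rng.normal(size=(n_rel, dim))

def transe_score(h, r, t):
    """Native TransE: translation distance ||h + r - t||.
    Lower distance = more plausible triplet."""
    return np.linalg.norm(E_head[h] + R_fwd[r] - E_tail[t])

def htranse_score(h, r, t):
    """Hypothetical HTransE scoring (assumed from the abstract): average the
    forward translation distance with the distance of the inverse triplet
    (t, r^-1, h), using role-specific entity embeddings."""
    fwd = np.linalg.norm(E_head[h] + R_fwd[r] - E_tail[t])
    inv = np.linalg.norm(E_head[t] + R_inv[r] - E_tail[h])
    return 0.5 * (fwd + inv)
```

Averaging the forward and inverse distances ties the two embedding classes together, which is what lets the inverse-relation term contribute gradient signal to both roles of each entity during training.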