Representation Learning of Knowledge Graph Integrating Entity Description and Language Morphological Structure Information

Xiaojuan Du, Yizheng Tao, Gongliang Li
{"title":"Representation Learning of Knowledge Graph Integrating Entity Description and Language Morphological Structure Information","authors":"Xiaojuan Du, Yizheng Tao, Gongliang Li","doi":"10.1109/icicse55337.2022.9828957","DOIUrl":null,"url":null,"abstract":"Knowledge graph embedding, which projects the symbolic relations and entities onto low-dimension continuous spaces, is the key to knowledge graph completion. The representation learning methods based on translation, such as TransE, TransH and TransR, only consider the triple information of knowledge graph, and fail to make effective use of other information of entity. To solve these problems, in this paper, we propose a knowledge graph representation learning method which integrates entity description and language morphological structure information to deal with complex relations (i.e. 1-N, N-1 and N-N relations). Firstly, the fastText model which considers affix of words is used to get the embedding of all entity description information. Then, the triple embedding, entity description embedding are spliced to obtain the representation of the final entity embedding. In addition, we propose a new score function-distcos–man, which considers the similarity of entity vector not only from the value of each dimension, but also from the direction of vectors. Experiments show that our method achieves substantial improvements against state-of-the-art baselines, especially the Hit@10s of head entity prediction for N-1 relations and tail entity prediction for 1-N relations improved by about 11.6% and 17.9% on FB15K database respectively.","PeriodicalId":177985,"journal":{"name":"2022 IEEE 2nd International Conference on Information Communication and Software Engineering (ICICSE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 2nd International Conference on Information Communication and Software Engineering (ICICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icicse55337.2022.9828957","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Knowledge graph embedding, which projects symbolic entities and relations onto low-dimensional continuous spaces, is key to knowledge graph completion. Translation-based representation learning methods such as TransE, TransH, and TransR consider only the triple information of the knowledge graph and fail to make effective use of other entity information. To address this, we propose a knowledge graph representation learning method that integrates entity descriptions and language morphological structure information to handle complex relations (i.e., 1-N, N-1, and N-N relations). First, the fastText model, which takes word affixes into account, is used to obtain embeddings of the entity description information. Then, the triple embedding and the entity description embedding are concatenated (spliced) to form the final entity representation. In addition, we propose a new score function, distcos-man, which measures the similarity of entity vectors not only by the value of each dimension but also by the direction of the vectors. Experiments show that our method achieves substantial improvements over state-of-the-art baselines; in particular, the Hits@10 of head entity prediction for N-1 relations and of tail entity prediction for 1-N relations improve by about 11.6% and 17.9%, respectively, on the FB15K dataset.
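The abstract outlines two concrete components: a description embedding built with a subword-aware fastText model that is spliced onto the structural (triple) embedding, and a score function, distcos-man, that combines per-dimension and directional similarity. The sketch below illustrates the first component. It is a minimal reading of the abstract, not the authors' implementation: gensim's FastText stands in for the paper's fastText model, the description corpus is a toy example, averaging word vectors is one simple pooling choice, and all dimensions are illustrative.

    # Sketch: entity representation = triple embedding spliced with a
    # fastText-based description embedding (assumptions noted above).
    import numpy as np
    from gensim.models import FastText

    # Toy tokenised entity descriptions; in the paper these would be the
    # textual descriptions attached to FB15K entities.
    descriptions = {
        "barack_obama": "44th president of the united states".split(),
        "hawaii": "u s state in the pacific ocean".split(),
    }

    # Subword-aware fastText model, so character n-grams (affixes) contribute
    # to each word vector.
    ft = FastText(sentences=list(descriptions.values()),
                  vector_size=50, window=3, min_count=1, min_n=3, max_n=6)

    def description_embedding(tokens):
        # Average the fastText word vectors of a description (one pooling choice).
        return np.mean([ft.wv[w] for w in tokens], axis=0)

    def splice(triple_emb, desc_emb):
        # Concatenate the structural embedding with the description embedding.
        return np.concatenate([triple_emb, desc_emb])

    triple_emb = np.random.randn(50)   # placeholder TransE-style entity vector
    entity_emb = splice(triple_emb, description_embedding(descriptions["barack_obama"]))
    print(entity_emb.shape)            # (100,)

For the second component, the abstract only states that distcos-man considers both the value of each dimension and the direction of the vectors; the exact formula is not given here. The following is one plausible, hedged interpretation under a TransE-style assumption h + r ≈ t, combining a Manhattan (L1) term for per-dimension mismatch with a cosine term for direction; the weight alpha is a hypothetical hyperparameter introduced purely for illustration.

    # Hedged sketch of a "distcos-man"-style score, not the published formula.
    import numpy as np

    def distcos_man(h, r, t, alpha=0.5):
        translated = h + r
        manhattan = np.sum(np.abs(translated - t))   # per-dimension (value) mismatch
        cosine = np.dot(translated, t) / (np.linalg.norm(translated) * np.linalg.norm(t) + 1e-12)
        # Lower score = more plausible triple: small L1 distance, high cosine similarity.
        return alpha * manhattan + (1.0 - alpha) * (1.0 - cosine)

    h, r, t = (np.random.randn(100) for _ in range(3))
    print(distcos_man(h, r, t))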