LDM-KGC: A low-dimensional knowledge graph completion model based on multi-head attention mechanism

IF 6.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Chaoqun Zhang, Bingjie Qiu, Weidong Tang, Bicheng Liang, Danyang Cui, Haisheng Luo, Qiming Chen
{"title":"LDM-KGC: A low-dimensional knowledge graph completion model based on multi-head attention mechanism","authors":"Chaoqun Zhang ,&nbsp;Bingjie Qiu ,&nbsp;Weidong Tang ,&nbsp;Bicheng Liang ,&nbsp;Danyang Cui ,&nbsp;Haisheng Luo ,&nbsp;Qiming Chen","doi":"10.1016/j.neucom.2025.130665","DOIUrl":null,"url":null,"abstract":"<div><div>Existing Transformer-based knowledge graph completion methods often rely on high-dimensional embeddings to achieve competitive performance, which to some extent limits their scalability on large-scale knowledge graphs. To address this challenge, the LDM-KGC model based on the multi-head attention mechanism is proposed. By combining QKV-layer and Update-layer, LDM-KGC can not only learn rich information but also reduce information loss during training, thereby achieving superior embedding representations in low-dimensional spaces. Specifically, QKV-layer utilizes the multi-head attention mechanism to effectively capture interactions between entities and relations, while Update-layer further refines the resulting embeddings. Experimental results on the FB15k-237 and WN18RR datasets demonstrate that LDM-KGC outperforms 14 baseline models, significantly improving mean reciprocal rank (MRR) by 12.4 percentage points and 24.4 percentage points over the worst baseline, respectively. Notably, LDM-KGC achieves MRR of 36.5%, Hits@1 of 27.1%, Hits@3 of 40.2%, and Hits@10 of 55.2% on the FB15k-237 dataset. Furthermore, LDM-KGC reaches a Hits@10 score of 65.2% on the NELL-995 dataset. These results underscore the effectiveness of LDM-KGC in generating low-dimensional embeddings, thereby offering a scalable solution for large-scale knowledge graph completion.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"649 ","pages":"Article 130665"},"PeriodicalIF":6.5000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225013372","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Existing Transformer-based knowledge graph completion methods often rely on high-dimensional embeddings to achieve competitive performance, which limits their scalability on large-scale knowledge graphs. To address this challenge, the LDM-KGC model, based on the multi-head attention mechanism, is proposed. By combining a QKV-layer and an Update-layer, LDM-KGC not only learns rich information but also reduces information loss during training, thereby achieving superior embedding representations in low-dimensional spaces. Specifically, the QKV-layer uses multi-head attention to capture interactions between entities and relations, while the Update-layer further refines the resulting embeddings. Experimental results on the FB15k-237 and WN18RR datasets show that LDM-KGC outperforms 14 baseline models, improving mean reciprocal rank (MRR) by 12.4 and 24.4 percentage points, respectively, over the weakest baseline. Notably, LDM-KGC achieves an MRR of 36.5%, Hits@1 of 27.1%, Hits@3 of 40.2%, and Hits@10 of 55.2% on the FB15k-237 dataset. Furthermore, it reaches a Hits@10 of 65.2% on the NELL-995 dataset. These results underscore the effectiveness of LDM-KGC in producing low-dimensional embeddings, offering a scalable solution for large-scale knowledge graph completion.
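The abstract describes the architecture only at a high level, so the following PyTorch snippet is a minimal, hypothetical sketch of how a low-dimensional, attention-based KGC scorer of this kind could be organized: a QKV-style multi-head attention step over the (head entity, relation) pair, followed by an update step that refines the attended representation before scoring candidate tail entities. The embedding dimension, the pooling, the residual MLP used as the "Update-layer", and all class and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a low-dimensional, attention-based
# KGC scorer: multi-head attention over the (head, relation) pair ("QKV-layer")
# followed by an assumed residual MLP refinement ("Update-layer") and dot-product
# scoring against all candidate tails.
import torch
import torch.nn as nn

class LDMKGCSketch(nn.Module):
    def __init__(self, n_entities, n_relations, dim=32, n_heads=4):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # low-dimensional entity embeddings
        self.rel = nn.Embedding(n_relations, dim)   # low-dimensional relation embeddings
        # "QKV-layer": self-attention over the two-token (head, relation) sequence
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # "Update-layer": assumed here to be a residual MLP that refines the attended output
        self.update = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, head_idx, rel_idx):
        h = self.ent(head_idx)                      # (batch, dim)
        r = self.rel(rel_idx)                       # (batch, dim)
        seq = torch.stack([h, r], dim=1)            # (batch, 2, dim)
        attended, _ = self.attn(seq, seq, seq)      # attention mixes entity and relation
        ctx = attended.mean(dim=1)                  # pool the two positions
        ctx = self.norm(ctx + self.update(ctx))     # residual refinement of the embedding
        return ctx @ self.ent.weight.t()            # score every entity as candidate tail

# Usage: rank tail candidates for one (head, relation) query on FB15k-237-sized vocab
model = LDMKGCSketch(n_entities=14541, n_relations=237, dim=32, n_heads=4)
scores = model(torch.tensor([0]), torch.tensor([5]))
print(scores.topk(10).indices)                      # top-10 predicted tail entities
```

The point of the sketch is the dimensionality: because interactions are captured by attention rather than by widening the embeddings, the entity and relation vectors can stay small (here 32 dimensions, an assumed value), which is what makes this style of model attractive for large-scale graphs.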
Source journal: Neurocomputing (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.