LDM-KGC: A low-dimensional knowledge graph completion model based on multi-head attention mechanism

Chaoqun Zhang, Bingjie Qiu, Weidong Tang, Bicheng Liang, Danyang Cui, Haisheng Luo, Qiming Chen

Neurocomputing, Volume 649, Article 130665, published 2025-06-25. DOI: 10.1016/j.neucom.2025.130665
URL: https://www.sciencedirect.com/science/article/pii/S0925231225013372
Abstract
Existing Transformer-based knowledge graph completion methods often rely on high-dimensional embeddings to achieve competitive performance, which limits their scalability to large-scale knowledge graphs. To address this challenge, the LDM-KGC model, based on the multi-head attention mechanism, is proposed. By combining a QKV-layer and an Update-layer, LDM-KGC not only learns rich information but also reduces information loss during training, thereby achieving superior embedding representations in low-dimensional spaces. Specifically, the QKV-layer uses the multi-head attention mechanism to capture interactions between entities and relations, while the Update-layer further refines the resulting embeddings. Experimental results on the FB15k-237 and WN18RR datasets demonstrate that LDM-KGC outperforms 14 baseline models, improving mean reciprocal rank (MRR) by 12.4 and 24.4 percentage points over the worst baseline, respectively. Notably, LDM-KGC achieves an MRR of 36.5%, Hits@1 of 27.1%, Hits@3 of 40.2%, and Hits@10 of 55.2% on the FB15k-237 dataset. Furthermore, LDM-KGC reaches a Hits@10 score of 65.2% on the NELL-995 dataset. These results underscore the effectiveness of LDM-KGC in generating low-dimensional embeddings, thereby offering a scalable solution for large-scale knowledge graph completion.
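To make the described architecture concrete, the following is a minimal sketch (not the authors' implementation) of the general idea in the abstract: multi-head attention over low-dimensional entity and relation embeddings in the spirit of the QKV-layer, a small refinement step standing in for the Update-layer, and dot-product scoring of candidate tail entities. All class, parameter, and dimension names (e.g., `LowDimAttentionKGC`, `dim=32`) are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: attention-based interaction of (head, relation)
# pairs in a low-dimensional space, followed by a refinement step and
# entity scoring. Hyperparameters and module names are assumptions.
import torch
import torch.nn as nn


class LowDimAttentionKGC(nn.Module):
    def __init__(self, num_entities, num_relations, dim=32, num_heads=4):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        # Attention over the (head, relation) pair, analogous in spirit to a QKV-layer.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Feed-forward refinement of the attended output, analogous to an Update-layer.
        self.update = nn.Sequential(
            nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, heads, relations):
        # Stack head-entity and relation embeddings as a length-2 sequence
        # so attention can model their interaction.
        h = self.ent_emb(heads)           # (batch, dim)
        r = self.rel_emb(relations)       # (batch, dim)
        seq = torch.stack([h, r], dim=1)  # (batch, 2, dim)
        attn_out, _ = self.attn(seq, seq, seq)
        # Residual connection (to limit information loss), then refine.
        refined = self.norm(seq + attn_out)
        query = self.update(refined.mean(dim=1))   # (batch, dim)
        # Score every entity as a candidate tail via dot product.
        return query @ self.ent_emb.weight.t()     # (batch, num_entities)


# Toy usage: two (head, relation) queries over 100 entities.
model = LowDimAttentionKGC(num_entities=100, num_relations=10)
scores = model(torch.tensor([1, 2]), torch.tensor([0, 3]))
print(scores.shape)  # torch.Size([2, 100])
```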
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.