Text-augmented long-term relation dependency learning for knowledge graph representation

IF 3.0 · Q2 · Computer Science, Information Systems
Quntao Zhu, Mengfan Li, Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin
{"title":"知识图表示的文本增强长期关系依赖学习","authors":"Quntao Zhu ,&nbsp;Mengfan Li ,&nbsp;Yuanjun Gao,&nbsp;Yao Wan,&nbsp;Xuanhua Shi,&nbsp;Hai Jin","doi":"10.1016/j.hcc.2025.100315","DOIUrl":null,"url":null,"abstract":"<div><div>Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches follow two categories: (1) Graph-based approaches encode KG elements into vectors using structural score functions. (2) Text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graph and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize a PLM based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. Besides, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning the long-term relation dependency and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100315"},"PeriodicalIF":3.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Text-augmented long-term relation dependency learning for knowledge graph representation\",\"authors\":\"Quntao Zhu ,&nbsp;Mengfan Li ,&nbsp;Yuanjun Gao,&nbsp;Yao Wan,&nbsp;Xuanhua Shi,&nbsp;Hai Jin\",\"doi\":\"10.1016/j.hcc.2025.100315\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches follow two categories: (1) Graph-based approaches encode KG elements into vectors using structural score functions. (2) Text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graph and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize a PLM based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. 
Besides, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning the long-term relation dependency and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.</div></div>\",\"PeriodicalId\":100605,\"journal\":{\"name\":\"High-Confidence Computing\",\"volume\":\"5 4\",\"pages\":\"Article 100315\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-04-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"High-Confidence Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667295225000194\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"High-Confidence Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667295225000194","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space and has shown significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches, which encode KG elements into vectors using structural score functions; and (2) text-based approaches, which embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned on triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network that bridges the gap between graph structure and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and use PLM-based prompt learning to capture textual information, enhanced by a mask-reconstruction strategy based on global influence weights and by textual contrastive learning for improved robustness and generalizability. In addition, to model multi-hop relations effectively, we propose a novel semantic-depth-guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks, which facilitates learning long-term relation dependencies and offers an adaptive attention mechanism for variable-length information. Extensive experiments demonstrate that our model outperforms existing models on KG completion and question-answering tasks.
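To make the long-term dependency component of the abstract concrete, below is a minimal PyTorch sketch of how cross-attention layers might be interleaved with a recurrent encoder over a multi-hop relation path. All names, dimensions, the choice of a GRU, and the pooling scheme are illustrative assumptions for exposition, not the authors' exact architecture.

```python
# A minimal sketch: a GRU consumes a multi-hop relation path step by step,
# and a cross-attention layer lets each recurrent state attend over the
# whole path, giving an adaptive weighting for variable-length inputs.
# Hypothetical component; dimensions and layer choices are assumptions.
import torch
import torch.nn as nn


class CrossAttentionPathEncoder(nn.Module):
    def __init__(self, num_relations: int, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        # Cross-attention: recurrent states (queries) attend over the raw
        # relation embeddings (keys/values) of the entire path.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, path: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        """path: (batch, max_len) relation ids; pad_mask: True at padding."""
        x = self.rel_emb(path)                      # (B, L, D)
        h, _ = self.gru(x)                          # recurrent path encoding
        # Long-range dependencies need not be squeezed through the GRU
        # alone: every step can look back at every relation in the path.
        attn_out, _ = self.cross_attn(h, x, x, key_padding_mask=pad_mask)
        h = self.norm(h + attn_out)                 # residual + layer norm
        # Pool the last non-padded state as the path representation.
        last = (~pad_mask).sum(dim=1) - 1           # index of last valid step
        return h[torch.arange(h.size(0)), last]     # (B, D)


if __name__ == "__main__":
    enc = CrossAttentionPathEncoder(num_relations=50)
    paths = torch.randint(0, 50, (2, 5))            # two relation paths, max 5 hops
    pad_mask = torch.tensor([[False] * 5,
                             [False, False, False, True, True]])  # 2nd path: 3 hops
    print(enc(paths, pad_mask).shape)               # torch.Size([2, 128])
```

The residual cross-attention over the full path is one plausible reading of "an adaptive attention mechanism for variable-length information": padding is masked out, so paths of different hop counts are weighted over only their valid steps.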
Source journal: High-Confidence Computing (CiteScore 4.70, self-citation rate 0.00%)