Electrical Equipment Knowledge Graph Embedding Using Language Model with Self-learned Prompts

JCR quartile: Q4 (Engineering)
Hong Yang, Xiaokai Meng, Hua Yu, Yang Bai, Yu Han, Yongxin Liu
DOI: 10.1142/s0129156424400500 · Journal: International Journal of High Speed Electronics and Systems · Published: 2024-08-09
Citations: 0

Abstract

In recent years, knowledge graphs have had a significant impact across diverse domains. Notably, the power knowledge graph has garnered considerable attention as a high-performance database. However, its reasoning capabilities remain largely untapped, offering an enticing avenue for exploration. A main obstacle is the sparsity of power grid datasets, especially the electrical equipment knowledge graph: because high-risk records are scarce, there exist a large number of long-tail entities and long-tail relations. To address this challenge, we introduce a novel text-based model called GELMSP (Graph Embedding using Language Model with Self-learned Prompts). We employ a bi-encoder structure along with a contrastive learning strategy to expedite the training process. Additionally, our approach incorporates a self-learned prompt mechanism that generates prompts for specific situations without the need for any additional information, a process we call self-learning. This harnesses the power of pre-trained language models to comprehend the semantic nuances within the entities and relationships of the knowledge graph. Adopting this method enables our model to handle sparse datasets effectively, leading to a comprehensive understanding of interconnectedness within the knowledge graph. Additionally, we demonstrate the efficacy of our model through extensive experiments and comparisons against baseline methods, reaffirming its potential in advancing the state of the art in electrical equipment defect diagnosis.
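The combination described in the abstract, a bi-encoder trained contrastively with learned prompt vectors prepended to each input, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy `encode` function stands in for a pre-trained language model, and all names, dimensions, and the in-batch-negative InfoNCE loss are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_PROMPT, BATCH, SEQ = 16, 4, 8, 5

# Self-learned prompts: trainable vectors prepended to every input
# sequence in place of hand-written prompt text (illustrative stand-in).
prompt_hr = rng.normal(size=(N_PROMPT, DIM))  # prompts for the (head, relation) encoder
prompt_t = rng.normal(size=(N_PROMPT, DIM))   # prompts for the tail-entity encoder

def encode(token_vecs, prompts):
    """Toy stand-in for a language-model encoder: prepend the learned
    prompt vectors, then mean-pool and L2-normalize."""
    seq = np.concatenate([prompts, token_vecs], axis=0)
    pooled = seq.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

# Fake token embeddings for a batch of (head+relation, tail) text pairs.
hr_tokens = rng.normal(size=(BATCH, SEQ, DIM))
t_tokens = rng.normal(size=(BATCH, SEQ, DIM))

hr_vecs = np.stack([encode(x, prompt_hr) for x in hr_tokens])  # (BATCH, DIM)
t_vecs = np.stack([encode(x, prompt_t) for x in t_tokens])     # (BATCH, DIM)

# Contrastive loss with in-batch negatives: the i-th tail is the positive
# for the i-th (head, relation) pair; all other tails in the batch are
# negatives. This reuse of batch members as negatives is what makes
# bi-encoder contrastive training cheap on sparse data.
tau = 0.05  # temperature (assumed value)
logits = hr_vecs @ t_vecs.T / tau  # (BATCH, BATCH) similarity matrix
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(float(loss))
```

In a real system the two encoders would share a pre-trained language model, the prompt vectors and encoder weights would be updated by backpropagating this loss, and scoring a candidate triple would reduce to a dot product between the cached (head, relation) and tail embeddings.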
Source journal
International Journal of High Speed Electronics and Systems (Engineering: Electrical and Electronic Engineering)
CiteScore: 0.60 · Self-citation rate: 0.00% · Publications: 22
Journal description: Launched in 1990, the International Journal of High Speed Electronics and Systems (IJHSES) has served graduate students and those in R&D, managerial and marketing positions by giving state-of-the-art data and the latest research trends. Its main charter is to promote engineering education by advancing interdisciplinary science between electronics and systems and to explore high speed technology in photonics and electronics. IJHSES, a quarterly journal, continues to feature broad coverage of topics relating to high speed or high performance devices, circuits and systems.