Exploring the Runtime Performance of Knowledge Graph Embedding Methods

Angelica Sofia Valeriani, Guido Walter Di Donato, M. Santambrogio
DOI: 10.1109/rtsi50628.2021.9597228
Published in: 2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI), 2021-09-06
Citations: 1

Abstract

In recent years, Knowledge Graphs (KGs) have become ubiquitous, powering recommendation systems, natural language processing, and query answering, among others. Moreover, representation learning on graphs has enabled unprecedentedly effective graph mining. In particular, Knowledge Graph Embedding (KGE) methods have gained increasing attention due to their effectiveness in representing real-world structured information while preserving relevant properties. Current research mainly focuses on improving and comparing the effectiveness of new KGE models on different predictive tasks. However, the application of KGE techniques in industrial scenarios sets a series of requirements on the runtime performance of the employed models. For this reason, this work aims to enable an effortless characterization of the runtime performance of KGE methods in terms of memory footprint and execution time. To this end, we propose KGE-Perf, a framework for evaluating available state-of-the-art implementations of KGE models against graphs with different properties, focusing on the efficacy of the adopted optimization strategies. Experimental evaluation of three representative KGE algorithms on open-access KGs shows that multi-threading on CPU is effective, but its benefits decrease as the number of threads grows. The use of vectorized instructions shows encouraging results in speeding up the training of KGE models, but GPU proves, hands down, to be the best architecture for the given task. Moreover, experimental results show how the RAM usage strongly depends on the input KG, with only slight variations between different models or hardware configurations.
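To make the kind of measurement the abstract describes concrete, the sketch below scores KG triples with a TransE-style embedding model and records the two metrics the paper targets: execution time and memory footprint. Note that the abstract does not name its three representative algorithms, so TransE is used here only as a common illustrative choice, and all sizes and names are hypothetical.

```python
import time
import tracemalloc
import numpy as np

rng = np.random.default_rng(0)

# Toy KG embeddings (sizes are arbitrary, for illustration only).
n_entities, n_relations, dim = 10_000, 50, 100
E = rng.normal(size=(n_entities, dim)).astype(np.float32)
R = rng.normal(size=(n_relations, dim)).astype(np.float32)

def transe_score(h, r, t):
    """TransE plausibility score: -||h + r - t||_2 (higher = more plausible)."""
    return -np.linalg.norm(E[h] + R[r] - E[t], axis=-1)

# Random batch of (head, relation, tail) triples to score.
n_triples = 100_000
heads = rng.integers(0, n_entities, size=n_triples)
rels = rng.integers(0, n_relations, size=n_triples)
tails = rng.integers(0, n_entities, size=n_triples)

# Measure wall-clock time and peak allocated memory of the batched scoring,
# mirroring the execution-time and memory-footprint metrics of the paper.
tracemalloc.start()
t0 = time.perf_counter()
scores = transe_score(heads, rels, tails)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"scored {len(scores)} triples in {elapsed:.3f}s, peak +{peak / 1e6:.1f} MB")
```

A framework like KGE-Perf would repeat such measurements across models, input graphs, and hardware configurations (thread counts, vectorization, GPU) rather than a single NumPy batch, but the metric collection follows the same pattern.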