An empirical study of pre-trained language models in simple knowledge graph question answering

IF 2.7 · CAS Tier 3 (Computer Science) · JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Nan Hu, Yike Wu, Guilin Qi, Dehai Min, Jiaoyan Chen, Jeff Z. Pan, Zafar Ali
World Wide Web-Internet and Web Information Systems · Published 2023-05-17 · DOI: 10.1007/s11280-023-01166-y
Citations: 7

Abstract

Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream tasks, and in recent work on knowledge graph question answering (KGQA), BERT and its variants have become standard components of KGQA models. However, there is still no comprehensive study comparing the performance of different PLMs on KGQA. To this end, we distill two basic PLM-based KGQA frameworks that use no additional neural network modules, and compare nine PLMs within them in terms of accuracy and efficiency. In addition, we construct three benchmarks over larger-scale KGs, based on the popular SimpleQuestions benchmark, to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based KGQA frameworks on these benchmarks and on two other popular datasets, WebQuestionsSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT ( https://chat.openai.com/ ), which has drawn a great deal of attention in the NLP community, on zero-shot KGQA, demonstrating both its impressive capabilities and its limitations. We have released the code and benchmarks to promote the use of PLMs on KGQA ( https://github.com/aannonymouuss/PLMs-in-Practical-KBQA ).
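To make the setup concrete, the sketch below shows one common way a bare PLM is used for simple KGQA without extra neural modules: scoring candidate KG relations for a question with a BERT cross-encoder. This is a minimal illustration, not the authors' released code; the checkpoint name, the single-logit scoring head, and the Freebase-style relation strings are assumptions, and the randomly initialized head would need fine-tuning on labeled (question, relation) pairs before its scores are meaningful.

```python
# Minimal sketch of PLM-based relation ranking for simple KGQA.
# Assumptions (not from the paper's repo): bert-base-uncased checkpoint,
# a single-logit relevance head, and illustrative Freebase-style relations.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one relevance score per (question, relation) pair
)
model.eval()  # NOTE: the head is randomly initialized; fine-tune before real use

def rank_relations(question: str, relations: list[str]) -> list[tuple[str, float]]:
    """Score each candidate relation against the question with a cross-encoder."""
    # Jointly encode each pair as "[CLS] question [SEP] relation [SEP]".
    inputs = tokenizer(
        [question] * len(relations),
        relations,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)  # shape: (num_relations,)
    return sorted(zip(relations, scores.tolist()), key=lambda p: -p[1])

# SimpleQuestions-style usage: the linked entity fixes the candidate relation
# set, and the top-ranked relation plus that entity selects the answer triple.
print(rank_relations(
    "who wrote the book gulliver's travels",
    ["book/written_work/author", "film/film/directed_by", "music/album/artist"],
))
```

Because the PLM is the entire model here, swapping in another of the compared PLMs (e.g., a RoBERTa or distilled checkpoint) only changes the `from_pretrained` name, which is what makes a head-to-head accuracy/efficiency comparison across nine PLMs straightforward.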


Source Journal
World Wide Web-Internet and Web Information Systems
Category: Engineering & Technology - Computer Science: Software Engineering
CiteScore: 7.30
Self-citation rate: 10.80%
Annual articles: 131
Review time: 6 months
Journal description: World Wide Web: Internet and Web Information Systems (WWW) is an international, archival, peer-reviewed journal which covers all aspects of the World Wide Web, including issues related to architectures, applications, Internet and Web information systems, and communities. The purpose of this journal is to provide an international forum for researchers, professionals, and industrial practitioners to share their rapidly developing knowledge and report on new advances in Internet and web-based systems. The journal also focuses on all database- and information-system topics that relate to the Internet and the Web, particularly on ways to model, design, develop, integrate, and manage these systems. Appearing quarterly, the journal publishes (1) papers describing original ideas and new results, (2) vision papers, (3) reviews of important techniques in related areas, (4) innovative application papers, and (5) progress reports on major international research projects. Papers published in the WWW journal deal with subjects directly or indirectly related to the World Wide Web. The WWW journal provides timely, in-depth coverage of the most recent developments in the World Wide Web discipline to enable anyone involved to keep up-to-date with this dynamically changing technology.