There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering

Journal: Icon (Q3, Arts and Humanities)
Publication date: 2023-01-10
DOI: 10.48550/arXiv.2301.04013
Ankush Agarwal, Sakharam Gawade, Sachin Channabasavarajendra, P. Bhattacharyya
Citations: 3

Abstract

The integration of knowledge graphs with deep learning is thriving as a way to improve the performance of various natural language processing (NLP) tasks. In this paper, we focus on knowledge-infused link prediction and question answering using the language models T5 and BLOOM across three domains: Aviation, Movie, and Web. In this context, we infuse knowledge into large and small language models, study their performance, and find it to be similar. For the link prediction task on the Aviation Knowledge Graph, we obtain a 0.2 hits@1 score using T5-small, T5-base, T5-large, and BLOOM. Using template-based scripts, we create a set of 1 million synthetic factoid QA pairs in the aviation domain from National Transportation Safety Board (NTSB) reports. On our curated QA pairs, the three T5 models achieve a 0.7 hits@1 score. We validate our findings with the paired Student's t-test and Cohen's kappa scores. For link prediction on the Aviation Knowledge Graph using T5-small and T5-large, we obtain a Cohen's kappa score of 0.76, showing substantial agreement between the models. Thus, we infer that small language models perform similarly to large language models when infused with knowledge.
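The evaluation reported above rests on two quantities: hits@1 (the fraction of test queries whose top-ranked prediction equals the gold answer) and Cohen's kappa (chance-corrected agreement between two models' per-item correctness). A minimal sketch of both in plain Python; the function names and data shapes are illustrative assumptions, not the authors' code:

```python
def hits_at_1(ranked_predictions, gold):
    """hits@1: fraction of queries where the top-ranked prediction
    matches the gold answer. `ranked_predictions` is a list of
    ranked candidate lists, one per query (assumed shape)."""
    correct = sum(1 for preds, g in zip(ranked_predictions, gold)
                  if preds and preds[0] == g)
    return correct / len(gold)


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two binary correct/incorrect label sequences
    (e.g. per-question correctness of two models). Returns
    (observed agreement - chance agreement) / (1 - chance agreement).
    Undefined (division by zero) when chance agreement is 1."""
    n = len(labels_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal positive rate.
    p_a1 = sum(labels_a) / n
    p_b1 = sum(labels_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)
```

For the paired significance test the abstract mentions, `scipy.stats.ttest_rel` implements the paired Student's t-test over matched per-item scores of two models.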