Construction of Question Answering System Based on English Pre-Trained Language Model Enhanced by Knowledge Graph

Pei Song
{"title":"Construction of Question Answering System Based on English Pre-Trained Language Model Enhanced by Knowledge Graph","authors":"Pei Song","doi":"10.1016/j.procs.2025.04.256","DOIUrl":null,"url":null,"abstract":"<div><div>With the continuous development of technology, question answering systems based on pre trained language models have become an important component of intelligent applications. However, traditional question-answering systems often face challenges in terms of understanding accuracy and insufficient knowledge coverage when dealing with open-domain questions. To this end, this paper uses a method for building a question-answering system based on a knowledge graph-enhanced BERT pre-trained language model. First, this paper trains the basic model based on a large-scale BERT pre-trained language model. Then, this paper adopts knowledge graph technology to introduce structured knowledge information into the model by integrating domain-specific knowledge bases. Finally, in order to effectively integrate knowledge graph information, this paper uses graph neural network (GNN) to model graph data, and combines the self-attention mechanism in the BERT model to optimize the weighted fusion process of knowledge graph information. Experimental results show that the BERT model enhanced with knowledge graph performs well in multiple question-answering tasks. On the simple question of the first experiment, the average accuracy of the enhanced model increased from 84.6% of the standard BERT model to 89.8%, and the F1 score increased from 0.86 to 0.91. In complex reasoning tasks, the BERT+knowledge graph model demonstrates stronger reasoning ability and higher knowledge coverage. In the experimental conclusion, the introduction of the knowledge graph significantly improves the model’s reasoning ability and knowledge coverage, especially in professional field problems and multi-step reasoning tasks, the enhanced model shows stronger capabilities. This research provides a new construction method for question-answering systems, demonstrates the great potential of knowledge graphs in natural language processing, and has broad application prospects.</div></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"261 ","pages":"Pages 647-655"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Procedia Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877050925013584","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the continuous development of technology, question-answering systems based on pre-trained language models have become an important component of intelligent applications. However, traditional question-answering systems often suffer from limited understanding accuracy and insufficient knowledge coverage when handling open-domain questions. To address this, this paper presents a method for building a question-answering system based on a knowledge-graph-enhanced BERT pre-trained language model. First, a base model is trained on top of a large-scale BERT pre-trained language model. Then, knowledge graph technology is used to introduce structured knowledge into the model by integrating domain-specific knowledge bases. Finally, to integrate the knowledge graph information effectively, a graph neural network (GNN) models the graph data, and BERT's self-attention mechanism is combined with it to optimize the weighted fusion of knowledge graph information. Experimental results show that the knowledge-graph-enhanced BERT model performs well across multiple question-answering tasks. On the simple-question set of the first experiment, average accuracy rose from 84.6% for the standard BERT model to 89.8% for the enhanced model, and the F1 score rose from 0.86 to 0.91. On complex reasoning tasks, the BERT + knowledge graph model demonstrates stronger reasoning ability and higher knowledge coverage. In conclusion, introducing the knowledge graph significantly improves the model's reasoning ability and knowledge coverage; the enhanced model is especially strong on domain-specific questions and multi-step reasoning tasks. This research provides a new construction method for question-answering systems and demonstrates the considerable potential of knowledge graphs in natural language processing, with broad application prospects.
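The abstract specifies the architecture only at a high level: a GNN encodes the knowledge graph, and BERT's self-attention drives a weighted fusion of the textual and graph representations. The following is a minimal PyTorch sketch of one plausible reading of that design; the class names (`SimpleGCNLayer`, `KGEnhancedBert`), the use of multi-head cross-attention for the fusion step, and all hyperparameters are illustrative assumptions, not the paper's published implementation.

```python
# Hypothetical sketch of a knowledge-graph-enhanced BERT QA model.
# The paper describes only "GNN over the KG + attention-weighted fusion";
# everything concrete below (layer choices, dimensions) is an assumption.
import torch
import torch.nn as nn
from transformers import BertModel

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighboring entity embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, dim) entity embeddings; adj: (N, N) normalized adjacency
        return torch.relu(self.linear(adj @ node_feats))

class KGEnhancedBert(nn.Module):
    """BERT text encoder + GNN-encoded KG entities, fused via attention."""
    def __init__(self, bert_name="bert-base-uncased", dim=768):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.gcn = SimpleGCNLayer(dim)
        # Fusion: queries are BERT token states, keys/values are KG entity states
        self.fusion = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(dim, 2)  # e.g., answer start/end logits

    def forward(self, input_ids, attention_mask, entity_feats, adj):
        text = self.bert(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        kg = self.gcn(entity_feats, adj).unsqueeze(0)   # (1, N, dim)
        kg = kg.expand(text.size(0), -1, -1)            # broadcast over batch
        fused, _ = self.fusion(query=text, key=kg, value=kg)
        return self.classifier(text + fused)            # residual fusion

# Toy usage (shapes only): 5 KG entities with an identity adjacency.
# model = KGEnhancedBert()
# logits = model(input_ids, attention_mask,
#                entity_feats=torch.randn(5, 768), adj=torch.eye(5))
```

The residual connection (`text + fused`) reflects the abstract's "weighted fusion" of graph information into the text representation while keeping the original BERT states intact; other fusion schemes (gating, concatenation) would be equally consistent with the abstract's description.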