Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective

Impact Factor 2.1 · CAS Tier 3 (Computer Science) · JCR Q3, Computer Science, Artificial Intelligence
Ernests Lavrinovics, Russa Biswas, Johannes Bjerva, Katja Hose
{"title":"Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective","authors":"Ernests Lavrinovics ,&nbsp;Russa Biswas ,&nbsp;Johannes Bjerva ,&nbsp;Katja Hose","doi":"10.1016/j.websem.2024.100844","DOIUrl":null,"url":null,"abstract":"<div><div>Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) based applications including automated text generation, question answering, chatbots, and others. However, they face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses. This undermines trust and limits the applicability of LLMs in different domains. Knowledge Graphs (KGs), on the other hand, provide a structured collection of interconnected facts represented as entities (nodes) and their relationships (edges). In recent research, KGs have been leveraged to provide context that can fill gaps in an LLM’s understanding of certain topics offering a promising approach to mitigate hallucinations in LLMs, enhancing their reliability and accuracy while benefiting from their wide applicability. Nonetheless, it is still a very active area of research with various unresolved open problems. In this paper, we discuss these open challenges covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and evaluating hallucinations. In our discussion, we consider the current use of KGs in LLM systems and identify future directions within each of these challenges.</div></div>","PeriodicalId":49951,"journal":{"name":"Journal of Web Semantics","volume":"85 ","pages":"Article 100844"},"PeriodicalIF":2.1000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Web Semantics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570826824000301","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP)-based applications, including automated text generation, question answering, chatbots, and others. However, they face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses. This undermines trust and limits the applicability of LLMs in different domains. Knowledge Graphs (KGs), on the other hand, provide a structured collection of interconnected facts represented as entities (nodes) and their relationships (edges). In recent research, KGs have been leveraged to provide context that can fill gaps in an LLM’s understanding of certain topics, offering a promising approach to mitigating hallucinations in LLMs and enhancing their reliability and accuracy while benefiting from their wide applicability. Nonetheless, it is still a very active area of research with various unresolved open problems. In this paper, we discuss these open challenges, covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and evaluating hallucinations. In our discussion, we consider the current use of KGs in LLM systems and identify future directions within each of these challenges.
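To make the knowledge-integration idea referred to above more concrete, the following is a minimal sketch of grounding an LLM prompt in KG facts. It assumes Wikidata's public SPARQL endpoint as the KG, the Python `requests` library for HTTP, and a hypothetical `call_llm` function standing in for whichever LLM API is actually used; it is an illustration of the general pattern, not the method of any specific paper.

```python
"""Minimal sketch of KG-grounded prompting to reduce hallucinations.

Assumptions (not from the paper): Wikidata's public SPARQL endpoint serves
as the KG, `requests` handles HTTP, and `call_llm` is a hypothetical
placeholder for a concrete LLM API.
"""
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"


def fetch_kg_facts(entity_qid: str, limit: int = 10) -> list[str]:
    """Retrieve direct (property, value) facts for a Wikidata entity as plain text."""
    query = f"""
    SELECT ?propLabel ?valueLabel WHERE {{
      wd:{entity_qid} ?p ?value .
      ?prop wikibase:directClaim ?p .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "kg-grounding-sketch/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [f'{r["propLabel"]["value"]}: {r["valueLabel"]["value"]}' for r in rows]


def build_grounded_prompt(question: str, facts: list[str]) -> str:
    """Prepend retrieved KG facts so the model answers from them rather than from memory."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer the question using only the facts below. "
        "If the facts are insufficient, say so.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    facts = fetch_kg_facts("Q11660", limit=5)  # Q11660 = "artificial intelligence"
    prompt = build_grounded_prompt("What field is artificial intelligence part of?", facts)
    print(prompt)
    # answer = call_llm(prompt)  # hypothetical; swap in any concrete LLM client here
```

Constraining the model to the retrieved triples (and allowing it to abstain when they are insufficient) is the basic mechanism by which KG context is meant to reduce hallucinated answers; the surveyed methods differ mainly in how facts are selected and how they are injected into the model.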
Source Journal
Journal of Web Semantics
Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 6.20
Self-citation rate: 12.00%
Articles per year: 22
Review time: 14.6 weeks
Aims & Scope: The Journal of Web Semantics is an interdisciplinary journal based on research and applications of various subject areas that contribute to the development of a knowledge-intensive and intelligent service Web. These areas include knowledge technologies, ontology, agents, databases, and the Semantic Grid; disciplines such as information retrieval, language technology, human-computer interaction, and knowledge discovery are of major relevance as well. All aspects of Semantic Web development are covered. The publication of large-scale experiments and their analysis is also encouraged to clearly illustrate scenarios and methods that introduce semantics into existing Web interfaces, contents, and services. The journal emphasizes the publication of papers that combine theories, methods, and experiments from different subject areas in order to deliver innovative semantic methods and applications.