Understanding Knowledge Drift in LLMs through Misinformation
Alina Fastowski, Gjergji Kasneci
arXiv:2409.07085 (arXiv - CS - Computation and Language), published 2024-09-11
Citations: 0
Abstract
Large Language Models (LLMs) have revolutionized numerous applications,
making them an integral part of our digital ecosystem. However, their
reliability becomes critical, especially when these models are exposed to
misinformation. We primarily analyze the susceptibility of state-of-the-art
LLMs to factual inaccuracies when they encounter false information in a QnA
scenario, an issue that can lead to a phenomenon we refer to as *knowledge
drift*, which significantly undermines the trustworthiness of these models. We
evaluate the factuality and the uncertainty of the models' responses using
entropy, perplexity, and token-probability metrics. Our experiments reveal that
an LLM's uncertainty can increase by up to 56.6% when a question is answered
incorrectly due to exposure to false information. At the same time,
repeated exposure to the same false information can decrease the model's
uncertainty again (-52.8% relative to answers on untainted prompts),
potentially manipulating the underlying model's beliefs and introducing a drift
from its original knowledge. These findings provide insights into LLMs'
robustness and vulnerability to adversarial inputs, paving the way for
developing more reliable LLM applications across various domains. The code is
available at https://github.com/afastowski/knowledge_drift.
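The abstract does not specify how the three uncertainty metrics are computed; a minimal sketch, assuming they are derived from a model's per-token log-probabilities in the standard way (all function and variable names here are illustrative, not from the paper), might look like:

```python
import math

def shannon_entropy(next_token_probs):
    """Shannon entropy (in nats) of a next-token probability distribution.
    Higher entropy means the model is less certain which token comes next."""
    return -sum(p * math.log(p) for p in next_token_probs if p > 0)

def perplexity(token_logprobs):
    """Perplexity of a generated answer: the exponential of the mean
    negative log-probability over the answer's tokens."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

def mean_token_probability(token_logprobs):
    """Average per-token probability assigned to the generated answer."""
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)
```

For intuition: a uniform distribution over two tokens has entropy log 2 ≈ 0.693, and an answer whose tokens each received probability 0.5 has perplexity 2.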