Understanding Knowledge Drift in LLMs through Misinformation

Alina Fastowski, Gjergji Kasneci
{"title":"通过错误信息了解法律硕士的知识漂移","authors":"Alina Fastowski, Gjergji Kasneci","doi":"arxiv-2409.07085","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) have revolutionized numerous applications,\nmaking them an integral part of our digital ecosystem. However, their\nreliability becomes critical, especially when these models are exposed to\nmisinformation. We primarily analyze the susceptibility of state-of-the-art\nLLMs to factual inaccuracies when they encounter false information in a QnA\nscenario, an issue that can lead to a phenomenon we refer to as *knowledge\ndrift*, which significantly undermines the trustworthiness of these models. We\nevaluate the factuality and the uncertainty of the models' responses relying on\nEntropy, Perplexity, and Token Probability metrics. Our experiments reveal that\nan LLM's uncertainty can increase up to 56.6% when the question is answered\nincorrectly due to the exposure to false information. At the same time,\nrepeated exposure to the same false information can decrease the models\nuncertainty again (-52.8% w.r.t. the answers on the untainted prompts),\npotentially manipulating the underlying model's beliefs and introducing a drift\nfrom its original knowledge. These findings provide insights into LLMs'\nrobustness and vulnerability to adversarial inputs, paving the way for\ndeveloping more reliable LLM applications across various domains. The code is\navailable at https://github.com/afastowski/knowledge_drift.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Understanding Knowledge Drift in LLMs through Misinformation\",\"authors\":\"Alina Fastowski, Gjergji Kasneci\",\"doi\":\"arxiv-2409.07085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLMs) have revolutionized numerous applications,\\nmaking them an integral part of our digital ecosystem. However, their\\nreliability becomes critical, especially when these models are exposed to\\nmisinformation. We primarily analyze the susceptibility of state-of-the-art\\nLLMs to factual inaccuracies when they encounter false information in a QnA\\nscenario, an issue that can lead to a phenomenon we refer to as *knowledge\\ndrift*, which significantly undermines the trustworthiness of these models. We\\nevaluate the factuality and the uncertainty of the models' responses relying on\\nEntropy, Perplexity, and Token Probability metrics. Our experiments reveal that\\nan LLM's uncertainty can increase up to 56.6% when the question is answered\\nincorrectly due to the exposure to false information. At the same time,\\nrepeated exposure to the same false information can decrease the models\\nuncertainty again (-52.8% w.r.t. the answers on the untainted prompts),\\npotentially manipulating the underlying model's beliefs and introducing a drift\\nfrom its original knowledge. These findings provide insights into LLMs'\\nrobustness and vulnerability to adversarial inputs, paving the way for\\ndeveloping more reliable LLM applications across various domains. 
The code is\\navailable at https://github.com/afastowski/knowledge_drift.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07085\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. We primarily analyze the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a QnA scenario, an issue that can lead to a phenomenon we refer to as *knowledge drift*, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models' responses relying on Entropy, Perplexity, and Token Probability metrics. Our experiments reveal that an LLM's uncertainty can increase up to 56.6% when the question is answered incorrectly due to the exposure to false information. At the same time, repeated exposure to the same false information can decrease the model's uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model's beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs' robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.
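
The abstract names three uncertainty measures: Entropy, Perplexity, and Token Probability. The snippet below is a minimal sketch (not the authors' released code; see the linked repository for the actual implementation) of how such quantities can be read off the output distribution of a Hugging Face causal LM. The model name, the example prompt, and the way misinformation would be injected are illustrative assumptions.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper evaluates larger state-of-the-art LLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# An untainted QnA prompt; a "tainted" variant would prepend false information
# about the same fact before the question, and the metrics below would then be
# compared between the two settings.
prompt = "Question: Who wrote 'Pride and Prejudice'? Answer:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
    )

# out.scores holds one (batch, vocab) logit tensor per generated token.
logits = torch.stack(out.scores, dim=0).squeeze(1)   # (steps, vocab)
probs = F.softmax(logits, dim=-1)
log_probs = F.log_softmax(logits, dim=-1)

# IDs of the newly generated answer tokens (sequences also contain the prompt).
gen_ids = out.sequences[0, inputs["input_ids"].shape[1]:]
chosen_log_probs = log_probs[torch.arange(len(gen_ids)), gen_ids]

token_probability = chosen_log_probs.exp().mean().item()   # mean probability of chosen tokens
entropy = -(probs * log_probs).sum(dim=-1).mean().item()   # mean predictive entropy per step
perplexity = torch.exp(-chosen_log_probs.mean()).item()    # exp of mean negative log-likelihood

print(f"token probability={token_probability:.4f}  "
      f"entropy={entropy:.4f}  perplexity={perplexity:.2f}")

In the paper's setup, such metrics are compared between answers to clean prompts and answers given after (possibly repeated) exposure to false information; the exact prompting and repetition protocol is documented in the repository linked above.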