Artificial intelligence—friend or foe in fake news campaigns

Impact Factor 1.2 · JCR Q3 (Economics)
Krzysztof Węcel, Marcin Sawiński, Milena Stróżyna, Włodzimierz Lewoniewski, Ewelina Księżniak, P. Stolarski, W. Abramowicz
Journal: Economics and Business Review
DOI: 10.18559/ebr.2023.2.736
Published: 2023-04-01 (Journal Article)

Abstract

In this paper, the impact of large language models (LLMs) on the fake news phenomenon is analysed. On the one hand, their capable text generation can be misused for the mass production of fake news. On the other hand, LLMs trained on huge volumes of text have already accumulated information on many facts, so one may assume they could be used for fact-checking. Experiments were designed and conducted to verify how closely LLM responses align with actual fact-checking verdicts. The research methodology consists of preparing an experimental dataset and a protocol for interacting with ChatGPT, currently the most sophisticated LLM. A research corpus was composed specifically for this work, consisting of several thousand claims randomly selected from claim reviews published by fact-checkers. Findings include: it is difficult to align the responses of ChatGPT with the explanations provided by fact-checkers, and prompts have a significant impact on the bias of responses. In its current state, ChatGPT can be used as a support in fact-checking but cannot verify claims directly.
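The protocol the abstract describes — eliciting a verdict from the LLM for each claim and measuring agreement with the fact-checker's published verdict — could be sketched roughly as follows. The label set, the keyword-based mapping of free-text answers to labels, and the function names are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch: map free-text LLM answers onto coarse verdict labels
# and compute the fraction that agrees with fact-checker verdicts.
# Label set and keyword rules are assumptions for illustration only.

VERDICTS = ("true", "false", "unverifiable")

def normalize_response(text: str) -> str:
    """Map a free-text LLM answer onto a coarse verdict label."""
    lowered = text.lower()
    # Check "false"/"incorrect" first so "incorrect" is not matched as "correct".
    if "false" in lowered or "incorrect" in lowered:
        return "false"
    if "true" in lowered or "correct" in lowered:
        return "true"
    return "unverifiable"

def alignment_rate(llm_answers, fact_checker_verdicts):
    """Fraction of claims whose normalized LLM answer matches the verdict."""
    matches = sum(
        normalize_response(answer) == verdict
        for answer, verdict in zip(llm_answers, fact_checker_verdicts)
    )
    return matches / len(fact_checker_verdicts)

answers = ["This claim is false.", "The statement is true.", "I cannot verify this."]
verdicts = ["false", "true", "false"]
print(alignment_rate(answers, verdicts))  # 2 of 3 answers match the verdicts
```

In practice the hard part, per the paper's findings, is exactly this normalization step: ChatGPT's explanations are difficult to align with fact-checkers' verdict categories, and the prompt wording itself biases the answers.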