{"title":"Silver Lining in the Fake News Cloud: Can Large Language Models Help Detect Misinformation?","authors":"Raghvendra Kumar;Bhargav Goddu;Sriparna Saha;Adam Jatowt","doi":"10.1109/TAI.2024.3440248","DOIUrl":null,"url":null,"abstract":"In the times of advanced generative artificial intelligence, distinguishing truth from fallacy and deception has become a critical societal challenge. This research attempts to analyze the capabilities of large language models (LLMs) for detecting misinformation. Our study employs a versatile approach, covering multiple LLMs with few- and zero-shot prompting. These models are rigorously evaluated across various fake news and rumor detection datasets. Introducing a novel dimension, we additionally incorporate sentiment and emotion annotations to understand the emotional influence on misinformation detection using LLMs. Moreover, to extend our inquiry, we employ ChatGPT to intentionally distort authentic news as well as human-written fake news, utilizing zero-shot and iterative prompts. This deliberate corruption allows for a detailed examination of various parameters such as abstractness, concreteness, and named entity density, providing insights into differentiating between unaltered news, human-written fake news, and its LLM-corrupted counterpart. Our findings aspire to furnish a refined framework for discerning authentic news, human-generated misinformation, and LLM-induced distortions. This multifaceted approach, utilizing various prompt techniques, contributes to a comprehensive understanding of the subtle variations shaping misinformation sources.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 1","pages":"14-24"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10631663/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In the era of advanced generative artificial intelligence, distinguishing truth from fallacy and deception has become a critical societal challenge. This research analyzes the capabilities of large language models (LLMs) for detecting misinformation. Our study employs a versatile approach, covering multiple LLMs with few- and zero-shot prompting. These models are rigorously evaluated across various fake news and rumor detection datasets. As a novel dimension, we additionally incorporate sentiment and emotion annotations to understand how emotion influences misinformation detection with LLMs. Moreover, to extend our inquiry, we employ ChatGPT to intentionally distort authentic news as well as human-written fake news, using zero-shot and iterative prompts. This deliberate corruption allows for a detailed examination of parameters such as abstractness, concreteness, and named-entity density, providing insight into differentiating among unaltered news, human-written fake news, and their LLM-corrupted counterparts. Our findings aim to provide a refined framework for discerning authentic news, human-generated misinformation, and LLM-induced distortions. This multifaceted approach, using various prompting techniques, contributes to a comprehensive understanding of the subtle variations that shape misinformation sources.
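To make the two core operations in the abstract concrete, here is a minimal sketch of (a) zero-shot misinformation labeling with an LLM and (b) a named-entity-density feature of the kind compared across unaltered, human-written fake, and LLM-corrupted news. This is an illustrative assumption, not the authors' pipeline: the model name, prompt wording, and density definition are all hypothetical stand-ins, using only the standard OpenAI Python client and spaCy APIs.

```python
# Hypothetical sketch: zero-shot REAL/FAKE labeling plus a simple
# named-entity-density feature. Model name, prompt wording, and the
# density definition are illustrative, not the paper's exact setup.
import spacy
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

client = OpenAI()
nlp = spacy.load("en_core_web_sm")

ZERO_SHOT_PROMPT = (
    "You are a fact-checking assistant. Read the news article below and "
    "answer with exactly one word, REAL or FAKE.\n\nArticle:\n{article}"
)

def zero_shot_label(article: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM for a REAL/FAKE verdict with a zero-shot prompt."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # near-deterministic output for evaluation
        messages=[{"role": "user",
                   "content": ZERO_SHOT_PROMPT.format(article=article)}],
    )
    return resp.choices[0].message.content.strip().upper()

def named_entity_density(article: str) -> float:
    """Fraction of tokens inside a named entity (one possible definition)."""
    doc = nlp(article)
    if len(doc) == 0:
        return 0.0
    entity_tokens = sum(len(ent) for ent in doc.ents)
    return entity_tokens / len(doc)

if __name__ == "__main__":
    text = "NASA announced on Monday that the Artemis II crew will fly in 2026."
    print(zero_shot_label(text), f"NE density = {named_entity_density(text):.2f}")
```

A few-shot variant would simply prepend labeled example articles to the prompt; the density feature can then be compared across the three news classes the paper distinguishes.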