Veracity-Oriented Context-Aware Large Language Models–Based Prompting Optimization for Fake News Detection

Impact Factor: 5.0 | CAS Zone 2 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence)
Weiqiang Jin, Yang Gao, Tao Tao, Xiujun Wang, Ningwei Wang, Baohai Wu, Biao Zhao
{"title":"面向真实性的上下文感知的基于大语言模型的虚假新闻检测提示优化","authors":"Weiqiang Jin,&nbsp;Yang Gao,&nbsp;Tao Tao,&nbsp;Xiujun Wang,&nbsp;Ningwei Wang,&nbsp;Baohai Wu,&nbsp;Biao Zhao","doi":"10.1155/int/5920142","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Fake news detection (FND) is a critical task in natural language processing (NLP) focused on identifying and mitigating the spread of misinformation. Large language models (LLMs) have recently shown remarkable abilities in understanding semantics and performing logical inference. However, their tendency to generate hallucinations poses significant challenges in accurately detecting deceptive content, leading to suboptimal performance. In addition, existing FND methods often underutilize the extensive prior knowledge embedded within LLMs, resulting in less effective classification outcomes. To address these issues, we propose the CAPE–FND framework, context-aware prompt engineering, designed for enhancing FND tasks. This framework employs unique veracity-oriented context-aware constraints, background information, and analogical reasoning to mitigate LLM hallucinations and utilizes self-adaptive bootstrap prompting optimization to improve LLM predictions. It further refines initial LLM prompts through adaptive iterative optimization using a random search bootstrap algorithm, maximizing the efficacy of LLM prompting. Extensive zero-shot and few-shot experiments using GPT-3.5-turbo across multiple public datasets demonstrate the effectiveness and robustness of our CAPE–FND framework, even surpassing advanced GPT-4.0 and human performance in certain scenarios. To support further LLM–based FND, we have made our approach’s code publicly available on GitHub (our CAPE–FND code: https://github.com/albert-jin/CAPE-FND [Accessed on 2024.09]).</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/5920142","citationCount":"0","resultStr":"{\"title\":\"Veracity-Oriented Context-Aware Large Language Models–Based Prompting Optimization for Fake News Detection\",\"authors\":\"Weiqiang Jin,&nbsp;Yang Gao,&nbsp;Tao Tao,&nbsp;Xiujun Wang,&nbsp;Ningwei Wang,&nbsp;Baohai Wu,&nbsp;Biao Zhao\",\"doi\":\"10.1155/int/5920142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n <p>Fake news detection (FND) is a critical task in natural language processing (NLP) focused on identifying and mitigating the spread of misinformation. Large language models (LLMs) have recently shown remarkable abilities in understanding semantics and performing logical inference. However, their tendency to generate hallucinations poses significant challenges in accurately detecting deceptive content, leading to suboptimal performance. In addition, existing FND methods often underutilize the extensive prior knowledge embedded within LLMs, resulting in less effective classification outcomes. To address these issues, we propose the CAPE–FND framework, context-aware prompt engineering, designed for enhancing FND tasks. This framework employs unique veracity-oriented context-aware constraints, background information, and analogical reasoning to mitigate LLM hallucinations and utilizes self-adaptive bootstrap prompting optimization to improve LLM predictions. 
It further refines initial LLM prompts through adaptive iterative optimization using a random search bootstrap algorithm, maximizing the efficacy of LLM prompting. Extensive zero-shot and few-shot experiments using GPT-3.5-turbo across multiple public datasets demonstrate the effectiveness and robustness of our CAPE–FND framework, even surpassing advanced GPT-4.0 and human performance in certain scenarios. To support further LLM–based FND, we have made our approach’s code publicly available on GitHub (our CAPE–FND code: https://github.com/albert-jin/CAPE-FND [Accessed on 2024.09]).</p>\\n </div>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":\"2025 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/5920142\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/int/5920142\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/int/5920142","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Fake news detection (FND) is a critical task in natural language processing (NLP) focused on identifying and mitigating the spread of misinformation. Large language models (LLMs) have recently shown remarkable abilities in understanding semantics and performing logical inference. However, their tendency to generate hallucinations poses significant challenges in accurately detecting deceptive content, leading to suboptimal performance. In addition, existing FND methods often underutilize the extensive prior knowledge embedded within LLMs, resulting in less effective classification outcomes. To address these issues, we propose the CAPE–FND framework, context-aware prompt engineering, designed for enhancing FND tasks. This framework employs unique veracity-oriented context-aware constraints, background information, and analogical reasoning to mitigate LLM hallucinations and utilizes self-adaptive bootstrap prompting optimization to improve LLM predictions. It further refines initial LLM prompts through adaptive iterative optimization using a random search bootstrap algorithm, maximizing the efficacy of LLM prompting. Extensive zero-shot and few-shot experiments using GPT-3.5-turbo across multiple public datasets demonstrate the effectiveness and robustness of our CAPE–FND framework, even surpassing advanced GPT-4.0 and human performance in certain scenarios. To support further LLM–based FND, we have made our approach’s code publicly available on GitHub (our CAPE–FND code: https://github.com/albert-jin/CAPE-FND [Accessed on 2024.09]).
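To make the described pipeline more concrete, the sketch below illustrates the general idea of assembling a veracity-oriented prompt from constraint, background-information, and analogical-example components, then tuning the combination with a random-search loop over a small labelled bootstrap set. It is a minimal illustration under our own assumptions: the component texts and the `build_prompt`, `accuracy`, `random_search`, and `call_llm` names are hypothetical and are not taken from the CAPE–FND code; refer to the authors' GitHub repository linked in the abstract for the actual implementation.

```python
# Hypothetical sketch of random-search prompt optimization for fake news detection.
# All component texts and helper names here are illustrative assumptions,
# not the authors' CAPE-FND implementation (see https://github.com/albert-jin/CAPE-FND).
import random

# Candidate building blocks for a veracity-oriented, context-aware prompt.
CONSTRAINTS = [
    "Judge only from verifiable facts; if evidence is insufficient, answer 'uncertain'.",
    "Do not speculate beyond the given article and the provided background information.",
]
BACKGROUNDS = [
    "Background: fabricated stories often rely on emotional wording and unnamed sources.",
    "Background: check whether named entities, dates, and figures are plausible.",
]
ANALOGIES = [
    "Analogy: a claim of a celebrity's death with no outlet cited turned out to be fake.",
    "Analogy: a report quoting an official press release turned out to be real.",
]

def build_prompt(constraint: str, background: str, analogy: str, article: str) -> str:
    """Assemble one candidate prompt from sampled components plus the news article."""
    return (
        f"{constraint}\n{background}\n{analogy}\n\n"
        f"News article:\n{article}\n\n"
        "Answer with exactly one word: 'fake' or 'real'."
    )

def accuracy(prompt_parts, dev_set, call_llm) -> float:
    """Score a candidate prompt combination on a small labelled bootstrap set."""
    hits = 0
    for article, label in dev_set:
        prediction = call_llm(build_prompt(*prompt_parts, article)).strip().lower()
        hits += int(prediction == label)
    return hits / len(dev_set)

def random_search(dev_set, call_llm, iterations: int = 20):
    """Random-search loop: sample prompt components, keep the best-scoring combination."""
    best_parts, best_score = None, -1.0
    for _ in range(iterations):
        parts = (
            random.choice(CONSTRAINTS),
            random.choice(BACKGROUNDS),
            random.choice(ANALOGIES),
        )
        score = accuracy(parts, dev_set, call_llm)
        if score > best_score:
            best_parts, best_score = parts, score
    return best_parts, best_score
```

A caller would supply `call_llm` (for example, a thin wrapper around the GPT-3.5-turbo chat API) and a small labelled development set, then apply the best-scoring prompt to unseen articles.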

Source journal
International Journal of Intelligent Systems (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Annual publications: 304
Review time: 9 months
Journal description: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.