LQ-FJS: A logical query-digging fake-news judgment system with structured video-summarization engine using LLM

IF 2.7 · CAS Tier 3, Computer Science · JCR Q3, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jhing-Fa Wang, Din-Yuen Chan, Hsin-Chun Tsai, Bo-Xuan Fang
{"title":"LQ-FJS:基于LLM的结构化视频摘要引擎逻辑查询挖掘假新闻判断系统","authors":"Jhing-Fa Wang ,&nbsp;Din-Yuen Chan ,&nbsp;Hsin-Chun Tsai ,&nbsp;Bo-Xuan Fang","doi":"10.1016/j.datak.2025.102507","DOIUrl":null,"url":null,"abstract":"<div><div>The proliferation of online social platforms can greatly benefit people by fostering remote relationships, but it also inevitably amplifies the impact of multimodal fake news on societal trust and ethics. Existing fake-news detection AI systems are still vulnerable to the inconspicuous and indiscernible multimodal misinformation, and often lacking interpretability and accuracy in cross-platform settings. Hence, we propose a new innovative logical query-digging fake-news judgment system (LQ-FJS) to tackle the above problem based on multimodal approach. The LQ-FJS verifies the truthfulness of claims made within multimedia news by converting video content into structured textual summaries. It then acts as an interpretable agent, explaining the reasons for identified fake news by the structured video-summarization engine (SVSE) to act as an interpretable detection intermediary agent. The SVSE generates condensed captions for raw video content, converting it into structured textual narratives. Then, LQ-FJS exploits these condensed captions to retrieve reliable information related to the video content from LLM. Thus, LQ-FJS cross-verifies external knowledge sources and internal LLM responses to determine whether contradictions exist with factual information through a multimodal inconsistency verification procedure. Our experiments demonstrate that the subtle summarization produced by SVSE can facilitate the generation of explanatory reports that mitigate large-scale trust deficits caused by opaque “black-box” models. Our experiments show that LQ-FJS improves F1 scores by 4.5% and 7.2% compared to state-of-the-art models (FactLLaMA 2023 and HiSS 2023), and increases 14% user trusts through interpretable conclusions.</div></div>","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"161 ","pages":"Article 102507"},"PeriodicalIF":2.7000,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LQ-FJS: A logical query-digging fake-news judgment system with structured video-summarization engine using LLM\",\"authors\":\"Jhing-Fa Wang ,&nbsp;Din-Yuen Chan ,&nbsp;Hsin-Chun Tsai ,&nbsp;Bo-Xuan Fang\",\"doi\":\"10.1016/j.datak.2025.102507\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The proliferation of online social platforms can greatly benefit people by fostering remote relationships, but it also inevitably amplifies the impact of multimodal fake news on societal trust and ethics. Existing fake-news detection AI systems are still vulnerable to the inconspicuous and indiscernible multimodal misinformation, and often lacking interpretability and accuracy in cross-platform settings. Hence, we propose a new innovative logical query-digging fake-news judgment system (LQ-FJS) to tackle the above problem based on multimodal approach. The LQ-FJS verifies the truthfulness of claims made within multimedia news by converting video content into structured textual summaries. It then acts as an interpretable agent, explaining the reasons for identified fake news by the structured video-summarization engine (SVSE) to act as an interpretable detection intermediary agent. 
The SVSE generates condensed captions for raw video content, converting it into structured textual narratives. Then, LQ-FJS exploits these condensed captions to retrieve reliable information related to the video content from LLM. Thus, LQ-FJS cross-verifies external knowledge sources and internal LLM responses to determine whether contradictions exist with factual information through a multimodal inconsistency verification procedure. Our experiments demonstrate that the subtle summarization produced by SVSE can facilitate the generation of explanatory reports that mitigate large-scale trust deficits caused by opaque “black-box” models. Our experiments show that LQ-FJS improves F1 scores by 4.5% and 7.2% compared to state-of-the-art models (FactLLaMA 2023 and HiSS 2023), and increases 14% user trusts through interpretable conclusions.</div></div>\",\"PeriodicalId\":55184,\"journal\":{\"name\":\"Data & Knowledge Engineering\",\"volume\":\"161 \",\"pages\":\"Article 102507\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2025-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data & Knowledge Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0169023X25001028\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data & Knowledge Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0169023X25001028","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The proliferation of online social platforms can greatly benefit people by fostering remote relationships, but it also inevitably amplifies the impact of multimodal fake news on societal trust and ethics. Existing AI systems for fake-news detection remain vulnerable to inconspicuous and hard-to-discern multimodal misinformation, and they often lack interpretability and accuracy in cross-platform settings. Hence, we propose an innovative logical query-digging fake-news judgment system (LQ-FJS) that tackles this problem with a multimodal approach. LQ-FJS verifies the truthfulness of claims made in multimedia news by converting video content into structured textual summaries. Its structured video-summarization engine (SVSE) serves as an interpretable detection intermediary: it generates condensed captions for raw video content, converting it into structured textual narratives that also support explanations of why a news item was judged fake. LQ-FJS then exploits these condensed captions to retrieve reliable information related to the video content from an LLM, and it cross-verifies external knowledge sources against internal LLM responses through a multimodal inconsistency-verification procedure to determine whether the content contradicts factual information. Our experiments demonstrate that the fine-grained summaries produced by SVSE facilitate the generation of explanatory reports, mitigating the large-scale trust deficits caused by opaque “black-box” models. LQ-FJS improves F1 scores by 4.5% and 7.2% over state-of-the-art models (FactLLaMA 2023 and HiSS 2023) and increases user trust by 14% through its interpretable conclusions.
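The abstract outlines a three-stage pipeline: SVSE condenses video into structured captions, the captions seed knowledge retrieval from an LLM and from external sources, and a cross-verification step flags contradictions. Below is a minimal sketch of that flow; every function name and the verdict logic are illustrative assumptions, not the authors' implementation, since the abstract does not specify the SVSE or LLM interfaces.

```python
# Hypothetical sketch of the LQ-FJS pipeline described in the abstract.
# All names and signatures here are assumptions for illustration; the
# paper's actual SVSE/LLM components are not detailed in this abstract.
from dataclasses import dataclass


@dataclass
class Verdict:
    is_fake: bool
    explanation: str  # interpretable report, per the paper's stated goal


def svse_summarize(video_path: str) -> list[str]:
    """Stand-in for the structured video-summarization engine (SVSE):
    converts raw video into condensed, structured textual captions."""
    raise NotImplementedError("requires a video-captioning model")


def query_llm(captions: list[str]) -> str:
    """Stand-in for retrieving content-related knowledge from an LLM,
    using the condensed captions as the logical query."""
    raise NotImplementedError("requires an LLM backend")


def fetch_external_evidence(captions: list[str]) -> str:
    """Stand-in for retrieving external knowledge sources."""
    raise NotImplementedError("requires a retrieval backend")


def contradicts(claim: str, evidence: str) -> bool:
    """Stand-in for the multimodal inconsistency-verification step."""
    raise NotImplementedError("requires a consistency/entailment model")


def judge(video_path: str) -> Verdict:
    # 1. SVSE: video -> structured textual summary.
    captions = svse_summarize(video_path)
    # 2. Use the captions to pull internal (LLM) and external knowledge.
    llm_view = query_llm(captions)
    external = fetch_external_evidence(captions)
    # 3. Cross-verify both knowledge sources against each caption's claims.
    inconsistent = any(
        contradicts(c, llm_view) or contradicts(c, external) for c in captions
    )
    reason = (
        "Contradiction found between the video's claims and retrieved facts."
        if inconsistent
        else "The video's claims are consistent with retrieved facts."
    )
    return Verdict(is_fake=inconsistent, explanation=reason)
```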
Source journal
Data & Knowledge Engineering (Engineering & Technology – Computer Science, Artificial Intelligence)
CiteScore: 5.00
Self-citation rate: 0.00%
Articles per year: 66
Review time: 6 months
Journal description: Data & Knowledge Engineering (DKE) stimulates the exchange of ideas and interaction between the two related fields of data engineering and knowledge engineering. DKE reaches a world-wide audience of researchers, designers, managers and users. The major aim of the journal is to identify, investigate and analyze the underlying principles in the design and effective use of these systems.