DCCMA-Net: Disentanglement-based cross-modal clues mining and aggregation network for explainable multimodal fake news detection

IF 7.4 · CAS Tier 1 (Management Science) · JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Siqi Wei, Zheng Wang, Meiling Li, Xuanning Liu, Bin Wu
{"title":"DCCMA-Net:基于解纠缠的多模态线索挖掘和聚合网络的可解释多模态假新闻检测","authors":"Siqi Wei,&nbsp;Zheng Wang,&nbsp;Meiling Li,&nbsp;Xuanning Liu,&nbsp;Bin Wu","doi":"10.1016/j.ipm.2025.104089","DOIUrl":null,"url":null,"abstract":"<div><div>Multimodal fake news detection is significant in safeguarding social security. Compared with single-text news, multimodal news data contains rich cross-modal clues that can improve the detection effectiveness: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. However, most existing studies ignore the disentanglement of modality-specific and modality-common semantics but treat them as an entangled whole. Consequently, these studies can only implicitly explore the interactions between modalities, resulting in a lack of explainability. To address that, we propose a Disentanglement-based Cross-modal Clues Mining and Aggregation Network for explainable fake news detection, called DCCMA-Net. Specifically, DCCMA-Net decomposes each modality into two distinct representations: a modality-common representation that captures shared semantics across modalities, and a modality-specific representation that captures unique semantics within each modality. Then, leveraging these disentangled representations, DCCMA-Net explicitly and comprehensively mines three cross-modal clues: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. Since not all clues play an equal role in the decision-making process, DCCMA-Net proposes an adaptive attention aggregation module to assign contribution weights to different clues. Finally, DCCMA-Net aggregates these clues based on their contribution weights to obtain highly discriminative news representations for detection, and highlights the most contributive clues as explanations for the detection results. Extensive experiments demonstrate that DCCMA-Net outperforms existing methods, achieving detection accuracy improvements of 2.53%, 4.01%, and 3.99% on Weibo, PHEME, and Gossipcop datasets, respectively. Moreover, the explainability accuracy of DCCMA-Net exceeds that of current state-of-the-art methods on the Weibo dataset.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 4","pages":"Article 104089"},"PeriodicalIF":7.4000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DCCMA-Net: Disentanglement-based cross-modal clues mining and aggregation network for explainable multimodal fake news detection\",\"authors\":\"Siqi Wei,&nbsp;Zheng Wang,&nbsp;Meiling Li,&nbsp;Xuanning Liu,&nbsp;Bin Wu\",\"doi\":\"10.1016/j.ipm.2025.104089\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multimodal fake news detection is significant in safeguarding social security. Compared with single-text news, multimodal news data contains rich cross-modal clues that can improve the detection effectiveness: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. However, most existing studies ignore the disentanglement of modality-specific and modality-common semantics but treat them as an entangled whole. Consequently, these studies can only implicitly explore the interactions between modalities, resulting in a lack of explainability. 
To address that, we propose a Disentanglement-based Cross-modal Clues Mining and Aggregation Network for explainable fake news detection, called DCCMA-Net. Specifically, DCCMA-Net decomposes each modality into two distinct representations: a modality-common representation that captures shared semantics across modalities, and a modality-specific representation that captures unique semantics within each modality. Then, leveraging these disentangled representations, DCCMA-Net explicitly and comprehensively mines three cross-modal clues: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. Since not all clues play an equal role in the decision-making process, DCCMA-Net proposes an adaptive attention aggregation module to assign contribution weights to different clues. Finally, DCCMA-Net aggregates these clues based on their contribution weights to obtain highly discriminative news representations for detection, and highlights the most contributive clues as explanations for the detection results. Extensive experiments demonstrate that DCCMA-Net outperforms existing methods, achieving detection accuracy improvements of 2.53%, 4.01%, and 3.99% on Weibo, PHEME, and Gossipcop datasets, respectively. Moreover, the explainability accuracy of DCCMA-Net exceeds that of current state-of-the-art methods on the Weibo dataset.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"62 4\",\"pages\":\"Article 104089\"},\"PeriodicalIF\":7.4000,\"publicationDate\":\"2025-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325000317\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325000317","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal fake news detection is significant in safeguarding social security. Compared with single-text news, multimodal news data contains rich cross-modal clues that can improve the detection effectiveness: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. However, most existing studies ignore the disentanglement of modality-specific and modality-common semantics but treat them as an entangled whole. Consequently, these studies can only implicitly explore the interactions between modalities, resulting in a lack of explainability. To address that, we propose a Disentanglement-based Cross-modal Clues Mining and Aggregation Network for explainable fake news detection, called DCCMA-Net. Specifically, DCCMA-Net decomposes each modality into two distinct representations: a modality-common representation that captures shared semantics across modalities, and a modality-specific representation that captures unique semantics within each modality. Then, leveraging these disentangled representations, DCCMA-Net explicitly and comprehensively mines three cross-modal clues: modality-common semantic enhancement, modality-specific semantic complementation, and modality-specific semantic inconsistency. Since not all clues play an equal role in the decision-making process, DCCMA-Net proposes an adaptive attention aggregation module to assign contribution weights to different clues. Finally, DCCMA-Net aggregates these clues based on their contribution weights to obtain highly discriminative news representations for detection, and highlights the most contributive clues as explanations for the detection results. Extensive experiments demonstrate that DCCMA-Net outperforms existing methods, achieving detection accuracy improvements of 2.53%, 4.01%, and 3.99% on Weibo, PHEME, and Gossipcop datasets, respectively. Moreover, the explainability accuracy of DCCMA-Net exceeds that of current state-of-the-art methods on the Weibo dataset.
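
To make the two core mechanisms in the abstract more concrete, the following is a minimal PyTorch sketch of (1) disentangling each modality into modality-common and modality-specific representations and (2) aggregating three cross-modal clues with adaptive attention weights. This is an illustrative sketch only, not the paper's implementation: the layer sizes, the linear projections, the particular clue constructions (sum and absolute difference), and the single-layer clue scorer are all assumptions made for demonstration.

```python
# Illustrative sketch of disentanglement + adaptive clue aggregation.
# All design choices below are assumptions, not the authors' DCCMA-Net code.
import torch
import torch.nn as nn


class DisentangleAndAggregate(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Separate projections split each modality into shared vs. unique semantics.
        self.text_common = nn.Linear(dim, dim)
        self.text_specific = nn.Linear(dim, dim)
        self.img_common = nn.Linear(dim, dim)
        self.img_specific = nn.Linear(dim, dim)
        # Adaptive attention assigns a contribution weight to each cross-modal clue.
        self.clue_scorer = nn.Linear(dim, 1)
        self.classifier = nn.Linear(dim, 2)  # real vs. fake

    def forward(self, text_feat: torch.Tensor, img_feat: torch.Tensor):
        t_c, t_s = self.text_common(text_feat), self.text_specific(text_feat)
        v_c, v_s = self.img_common(img_feat), self.img_specific(img_feat)

        # Three cross-modal clues (constructions chosen for illustration):
        clues = torch.stack([
            t_c + v_c,             # modality-common semantic enhancement
            t_s + v_s,             # modality-specific semantic complementation
            torch.abs(t_s - v_s),  # modality-specific semantic inconsistency
        ], dim=1)                  # shape: (batch, 3, dim)

        # Contribution weights over the three clues; the largest weight can be
        # surfaced as the explanation for the prediction.
        weights = torch.softmax(self.clue_scorer(clues).squeeze(-1), dim=-1)
        news_repr = (weights.unsqueeze(-1) * clues).sum(dim=1)
        return self.classifier(news_repr), weights


# Usage with random features standing in for pretrained text/image encoders:
model = DisentangleAndAggregate(dim=256)
logits, clue_weights = model(torch.randn(4, 256), torch.randn(4, 256))
print(logits.shape, clue_weights.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```

In this sketch, the per-clue softmax weights play the role described in the abstract: they determine how much each clue contributes to the final news representation, and the most heavily weighted clue can be reported as the explanation for the detection result.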
Source journal
Information Processing & Management (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles published per year: 276
Review time: 39 days
Journal description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Its scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. The journal caters to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field, with particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research.