LQMF-RD: A lightweight quantum-driven multi-modal fusion framework for rumor detection

IF 7.6 · JCR Region 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Keliang Jia, Fanxu Meng, Ziwen Chen, Mengyao Du, Jing Liang
DOI: 10.1016/j.knosys.2025.114633
Journal: Knowledge-Based Systems, Volume 330, Article 114633
Published: 2025-10-10 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0950705125016727
Citations: 0

Abstract

In recent years, automated rumor detection has garnered significant attention. Despite notable progress in multi-modal modeling for social media rumor detection, two major challenges remain: (1) the dynamic characteristics of social networks during the propagation process are often overlooked; (2) multi-modal features (such as text, images, and propagation graphs) are often poorly aligned, leading to redundant model parameters. To address these issues, we propose LQMF-RD, a lightweight quantum-driven multi-modal feature fusion framework for rumor detection. First, to capture the dynamic nature of rumor propagation, we design a Dynamic Graph Network (DGN) that leverages the spatiotemporal characteristics of the propagation graph, effectively capturing both neighborhood dependencies and temporal evolution among nodes. Then, we employ amplitude encoding to project the extracted multi-modal features into a ⌈log2 N⌉-qubit quantum state space. Finally, we construct a Lightweight Quantum-driven Multi-modal Fusion Network (LQMFN), which enables deep interaction and fusion of multi-modal features through quantum convolution and pooling operations. LQMFN updates only 0.01M parameters, substantially reducing computational complexity and storage overhead. Experimental results show that LQMF-RD not only delivers superior performance on rumor detection tasks, but also achieves high computational efficiency and strong robustness to quantum noise.
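The amplitude encoding step mentioned in the abstract is a standard technique: an N-dimensional classical feature vector is written into the 2^n amplitudes of an n = ⌈log2 N⌉-qubit state, after padding to a power of two and normalizing to unit L2 norm. The minimal NumPy sketch below illustrates the classical preprocessing only; `amplitude_encode` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def amplitude_encode(features):
    """Map an N-dimensional feature vector to the amplitudes of an
    n = ceil(log2 N)-qubit state: pad to length 2**n, normalize to
    unit L2 norm. Returns (state, n). Illustrative sketch only."""
    features = np.asarray(features, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(features))))
    dim = 2 ** n_qubits
    state = np.zeros(dim)
    state[: len(features)] = features      # zero-pad to a power of two
    norm = np.linalg.norm(state)
    if norm == 0:
        raise ValueError("cannot amplitude-encode the zero vector")
    return state / norm, n_qubits

# 5 features fit into ceil(log2 5) = 3 qubits, i.e. 8 amplitudes
state, n = amplitude_encode([3.0, 4.0, 0.0, 0.0, 1.0])
```

This compression from N features to ⌈log2 N⌉ qubits is what makes the downstream quantum convolution and pooling layers lightweight relative to classical fusion networks.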
Source journal

Knowledge-Based Systems (Engineering & Technology, Computer Science: Artificial Intelligence)

CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.