Keliang Jia, Fanxu Meng, Ziwen Chen, Mengyao Du, Jing Liang
{"title":"LQMF-RD:一种用于谣言检测的轻量级量子驱动多模态融合框架","authors":"Keliang Jia, Fanxu Meng, Ziwen Chen, Mengyao Du, Jing Liang","doi":"10.1016/j.knosys.2025.114633","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, automated rumor detection has garnered significant attention. Despite notable progress in multi-modal modeling for social media rumor detection, two major challenges remain: (1) the dynamic characteristics of social networks during the propagation process are often overlooked; (2) multi-modal features (such as text, images, and propagation graphs), are often poorly aligned and lead to redundant model parameters. To address these issues, we propose LQMF-RD, a lightweight quantum-driven multi-modal feature fusion framework for rumor detection. First, to capture the dynamic nature of rumor propagation, we design a Dynamic Graph Network (DGN) that leverages the spatiotemporal characteristics of propagation graph, effectively capturing both neighborhood dependencies and temporal evolution among nodes. Then, we employ amplitude encoding to project the extracted multi-modal features into a <span><math><mrow><mo>[</mo><msub><mi>log</mi><mn>2</mn></msub><mi>N</mi><mo>]</mo></mrow></math></span>-dimensional quantum state space. Finally, we construct a Lightweight Quantum-driven Multi-modal Fusion Network (LQMFN), which enables deep interaction and fusion of multi-modal features through quantum convolution and pooling operations. LQMFN updates only 0.01M parameters, substantially reducing computational complexity and storage overhead. 
Experimental results show that LQMF-RD not only delivers superior performance on rumor detection tasks, but also achieves high computational efficiency and strong robustness to quantum noise.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"330 ","pages":"Article 114633"},"PeriodicalIF":7.6000,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LQMF-RD: A lightweight quantum-driven multi-modal fusion framework for rumor detection\",\"authors\":\"Keliang Jia, Fanxu Meng, Ziwen Chen, Mengyao Du, Jing Liang\",\"doi\":\"10.1016/j.knosys.2025.114633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In recent years, automated rumor detection has garnered significant attention. Despite notable progress in multi-modal modeling for social media rumor detection, two major challenges remain: (1) the dynamic characteristics of social networks during the propagation process are often overlooked; (2) multi-modal features (such as text, images, and propagation graphs), are often poorly aligned and lead to redundant model parameters. To address these issues, we propose LQMF-RD, a lightweight quantum-driven multi-modal feature fusion framework for rumor detection. First, to capture the dynamic nature of rumor propagation, we design a Dynamic Graph Network (DGN) that leverages the spatiotemporal characteristics of propagation graph, effectively capturing both neighborhood dependencies and temporal evolution among nodes. Then, we employ amplitude encoding to project the extracted multi-modal features into a <span><math><mrow><mo>[</mo><msub><mi>log</mi><mn>2</mn></msub><mi>N</mi><mo>]</mo></mrow></math></span>-dimensional quantum state space. 
Finally, we construct a Lightweight Quantum-driven Multi-modal Fusion Network (LQMFN), which enables deep interaction and fusion of multi-modal features through quantum convolution and pooling operations. LQMFN updates only 0.01M parameters, substantially reducing computational complexity and storage overhead. Experimental results show that LQMF-RD not only delivers superior performance on rumor detection tasks, but also achieves high computational efficiency and strong robustness to quantum noise.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"330 \",\"pages\":\"Article 114633\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2025-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125016727\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125016727","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
LQMF-RD: A lightweight quantum-driven multi-modal fusion framework for rumor detection
In recent years, automated rumor detection has garnered significant attention. Despite notable progress in multi-modal modeling for social media rumor detection, two major challenges remain: (1) the dynamic characteristics of social networks during the propagation process are often overlooked; (2) multi-modal features (such as text, images, and propagation graphs) are often poorly aligned, leading to redundant model parameters. To address these issues, we propose LQMF-RD, a lightweight quantum-driven multi-modal feature fusion framework for rumor detection. First, to capture the dynamic nature of rumor propagation, we design a Dynamic Graph Network (DGN) that leverages the spatiotemporal characteristics of the propagation graph, effectively capturing both neighborhood dependencies and temporal evolution among nodes. Then, we employ amplitude encoding to project the extracted multi-modal features into a ⌈log₂N⌉-dimensional quantum state space. Finally, we construct a Lightweight Quantum-driven Multi-modal Fusion Network (LQMFN), which enables deep interaction and fusion of multi-modal features through quantum convolution and pooling operations. LQMFN updates only 0.01M parameters, substantially reducing computational complexity and storage overhead. Experimental results show that LQMF-RD not only delivers superior performance on rumor detection tasks, but also achieves high computational efficiency and strong robustness to quantum noise.
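The amplitude encoding mentioned in the abstract maps an N-dimensional classical feature vector onto the amplitudes of a quantum state over ⌈log₂N⌉ qubits, which is where the framework's dimensionality reduction comes from. The sketch below illustrates only this general encoding step (pad to a power of two, then L2-normalize); the paper's actual circuit and feature extractors are not described here, so this is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

def amplitude_encode(features):
    """Illustrative amplitude encoding: pad an N-dim feature vector to the
    next power of two and L2-normalize it, so it can serve as the amplitude
    vector of a state over ceil(log2 N) qubits. (Generic sketch; the paper's
    exact encoding circuit is not specified in the abstract.)"""
    features = np.asarray(features, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(features)))))
    dim = 2 ** n_qubits                      # state-vector length
    padded = np.zeros(dim)
    padded[: len(features)] = features       # zero-pad unused amplitudes
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits           # unit-norm state, qubit count

# 3 classical features fit into 2 qubits (a 4-dimensional state space)
state, n_qubits = amplitude_encode([0.5, 1.0, 0.25])
```

The exponential compression (N features into log₂N qubits) is what allows the fused representation to stay small, consistent with the abstract's claim of a 0.01M-parameter fusion network.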
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.