An Explainable Multi-view Semantic Fusion Model for Multimodal Fake News Detection

Zhi Zeng, Mingmin Wu, Guodong Li, Xiang Li, Zhongqiang Huang, Ying Sha
DOI: 10.1109/ICME55011.2023.00215
Published in: 2023 IEEE International Conference on Multimedia and Expo (ICME), July 2023
Citations: 0

Abstract

Existing models have achieved great success in capturing and fusing multimodal semantics of news. However, they pay more attention to global information, ignoring the interactions between global and local semantics and the inconsistency between different modalities. Therefore, we propose an explainable multi-view semantic fusion model (EMSFM), which aggregates important inconsistent semantics from the local and global views to complement the global information. Inspired by the various forms that fake and real news take, we summarize four views of multimodal correlation: consistency and inconsistency in the local and global views. Integrating these four views, our EMSFM can interpretably establish global and local fusion between consistent and inconsistent semantics in multimodal relations for fake news detection. Extensive experimental results show that the EMSFM improves the performance of multimodal fake news detection and provides a novel paradigm for explainable multi-view semantic fusion.
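The four views described in the abstract (consistent and inconsistent semantics, each at local and global granularity) can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hypothetical decomposition in which the "consistent" part of one modality's embedding is its projection onto the other modality's embedding and the "inconsistent" part is the orthogonal residual, computed once on pooled (global) features and once on token-to-region matches (local). The function names, pooling choices, and matching scheme are all illustrative assumptions.

```python
import numpy as np

def decompose(u, v):
    """Split u into a component consistent with v (projection onto v)
    and an inconsistent component (the orthogonal residual)."""
    v_hat = v / (np.linalg.norm(v) + 1e-8)
    consistent = (u @ v_hat) * v_hat
    return consistent, u - consistent

def four_view_features(text_tokens, image_regions):
    """Hypothetical four-view feature builder:
    {global, local} x {consistent, inconsistent}."""
    # Global views: mean-pooled embeddings of each modality.
    t_global = text_tokens.mean(axis=0)
    i_global = image_regions.mean(axis=0)
    g_cons, g_incons = decompose(t_global, i_global)

    # Local views: each text token paired with its most similar image region.
    sims = text_tokens @ image_regions.T          # (n_tokens, n_regions)
    matched = image_regions[sims.argmax(axis=1)]  # best region per token
    local = [decompose(tok, reg) for tok, reg in zip(text_tokens, matched)]
    l_cons = np.mean([c for c, _ in local], axis=0)
    l_incons = np.mean([r for _, r in local], axis=0)

    # Concatenate the four views for a downstream fake-news classifier.
    return np.concatenate([g_cons, g_incons, l_cons, l_incons])
```

For embeddings of dimension d, the result is a 4d-dimensional feature vector; by construction each consistent/inconsistent pair is orthogonal and sums back to the original vector, which is what makes the decomposition inspectable.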