A Quantum Multimodal Neural Network Model for Sentiment Analysis on Quantum Circuits

Jin Zheng;Qing Gao;Daoyi Dong;Jinhu Lü;Yue Deng
IEEE Transactions on Artificial Intelligence, vol. 6, no. 5, pp. 1128–1142
DOI: 10.1109/TAI.2024.3511514
Published: 2024-12-04
https://ieeexplore.ieee.org/document/10778283/
Citations: 0

Abstract

This article proposes a quantum multimodal neural network (QMNN) model that can be implemented on parameterized quantum circuits (PQCs), providing a novel avenue for processing multimodal data and performing advanced multimodal sentiment analysis tasks. The QMNN model is structured into four fundamental blocks: multimodal data preprocessing, unimodal feature extraction, multimodal feature fusion, and optimization. Through these blocks, multimodal data are first preprocessed and encoded into quantum states. Subsequently, visual and textual features are extracted from the quantum states and then integrated to learn the interactions between the modalities. Finally, the model parameters are fine-tuned to optimize sentiment analysis performance. Simulation results confirm that QMNN surpasses state-of-the-art baselines while using significantly lower input dimensions and substantially fewer parameters than classical models. Furthermore, the entanglement, integrity, robustness, and scalability of the model are analyzed in depth. Internally, the strong entanglement within the multimodal fusion block enhances interactions between textual and visual features, and the integrity of the model reflects the indispensable contribution of each component to the overall performance. Externally, robustness ensures the model operates stably under noisy conditions and incomplete inputs, and scalability enables it to adapt efficiently to varying architectural depths and widths. Together, these simulation results and performance analyses demonstrate the comprehensive strength of the proposed model.
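The abstract's pipeline — classical features angle-encoded into quantum states, an entangling gate fusing the modality qubits, trainable rotations, and an expectation-value readout — can be illustrated with a minimal two-qubit simulation in plain NumPy. This is a toy sketch of the general PQC pattern, not the paper's actual circuit: the gate choices, one-qubit-per-modality layout, and function names are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate RY(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangles the two modality qubits (the "fusion" step)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def qmnn_toy(visual_feat, textual_feat, weights):
    """Toy two-qubit PQC: angle-encode one scalar feature per modality,
    entangle with CNOT, apply trainable rotations, measure <Z> on qubit 0."""
    zero = np.array([1.0, 0.0])
    # Preprocessing block: each classical feature becomes a rotation angle
    state = np.kron(ry(visual_feat) @ zero, ry(textual_feat) @ zero)
    # Fusion block: CNOT creates correlations between the modality qubits
    state = CNOT @ state
    # Trainable block: parameterized single-qubit rotations (to be optimized)
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state
    # Readout: Pauli-Z expectation on qubit 0, in [-1, 1], as a sentiment score
    Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))
    return float(state @ Z0 @ state)

# With all angles zero the state stays |00>, so <Z0> = +1;
# rotating the visual qubit by pi flips it, so <Z0> = -1.
print(qmnn_toy(0.0, 0.0, [0.0, 0.0]))
print(qmnn_toy(np.pi, 0.0, [0.0, 0.0]))
```

In a real training loop the `weights` would be tuned (e.g., by gradient descent with the parameter-shift rule) against a sentiment loss, which corresponds to the abstract's optimization block.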