A hybrid quantum–classical neural network for learning transferable visual representation

IF 5.6 · Zone 2 (Physics and Astronomy) · Q1 PHYSICS, MULTIDISCIPLINARY
Ruhan Wang, P. Richerme, Fan Chen
{"title":"一种用于学习可转移视觉表征的混合量子经典神经网络","authors":"Ruhan Wang, P. Richerme, Fan Chen","doi":"10.1088/2058-9565/acf1c7","DOIUrl":null,"url":null,"abstract":"State-of-the-art quantum machine learning (QML) algorithms fail to offer practical advantages over their notoriously powerful classical counterparts, due to the limited learning capabilities of QML algorithms, the constrained computational resources available on today’s noisy intermediate-scale quantum (NISQ) devices, and the empirically designed circuit ansatz for QML models. In this work, we address these challenges by proposing a hybrid quantum–classical neural network (CaNN), which we call QCLIP, for Quantum Contrastive Language-Image Pre-Training. Rather than training a supervised QML model to predict human annotations, QCLIP focuses on more practical transferable visual representation learning, where the developed model can be generalized to work on unseen downstream datasets. QCLIP is implemented by using CaNNs to generate low-dimensional data feature embeddings followed by quantum neural networks to adapt and generalize the learned representation in the quantum Hilbert space. Experimental results show that the hybrid QCLIP model can be efficiently trained for representation learning. We evaluate the representation transfer capability of QCLIP against the classical Contrastive Language-Image Pre-Training model on various datasets. Simulation results and real-device results on NISQ IBM_Auckland quantum computer both show that the proposed QCLIP model outperforms the classical CLIP model in all test cases. As the field of QML on NISQ devices is continually evolving, we anticipate that this work will serve as a valuable foundation for future research and advancements in this promising area.","PeriodicalId":20821,"journal":{"name":"Quantum Science and Technology","volume":"111 1","pages":""},"PeriodicalIF":5.6000,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A hybrid quantum–classical neural network for learning transferable visual representation\",\"authors\":\"Ruhan Wang, P. Richerme, Fan Chen\",\"doi\":\"10.1088/2058-9565/acf1c7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"State-of-the-art quantum machine learning (QML) algorithms fail to offer practical advantages over their notoriously powerful classical counterparts, due to the limited learning capabilities of QML algorithms, the constrained computational resources available on today’s noisy intermediate-scale quantum (NISQ) devices, and the empirically designed circuit ansatz for QML models. In this work, we address these challenges by proposing a hybrid quantum–classical neural network (CaNN), which we call QCLIP, for Quantum Contrastive Language-Image Pre-Training. Rather than training a supervised QML model to predict human annotations, QCLIP focuses on more practical transferable visual representation learning, where the developed model can be generalized to work on unseen downstream datasets. QCLIP is implemented by using CaNNs to generate low-dimensional data feature embeddings followed by quantum neural networks to adapt and generalize the learned representation in the quantum Hilbert space. Experimental results show that the hybrid QCLIP model can be efficiently trained for representation learning. We evaluate the representation transfer capability of QCLIP against the classical Contrastive Language-Image Pre-Training model on various datasets. 
Simulation results and real-device results on NISQ IBM_Auckland quantum computer both show that the proposed QCLIP model outperforms the classical CLIP model in all test cases. As the field of QML on NISQ devices is continually evolving, we anticipate that this work will serve as a valuable foundation for future research and advancements in this promising area.\",\"PeriodicalId\":20821,\"journal\":{\"name\":\"Quantum Science and Technology\",\"volume\":\"111 1\",\"pages\":\"\"},\"PeriodicalIF\":5.6000,\"publicationDate\":\"2023-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Quantum Science and Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://doi.org/10.1088/2058-9565/acf1c7\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PHYSICS, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantum Science and Technology","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1088/2058-9565/acf1c7","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

State-of-the-art quantum machine learning (QML) algorithms fail to offer practical advantages over their notoriously powerful classical counterparts, due to the limited learning capabilities of QML algorithms, the constrained computational resources available on today's noisy intermediate-scale quantum (NISQ) devices, and the empirically designed circuit ansatz for QML models. In this work, we address these challenges by proposing a hybrid quantum–classical neural network for Quantum Contrastive Language-Image Pre-Training, which we call QCLIP. Rather than training a supervised QML model to predict human annotations, QCLIP focuses on more practical transferable visual representation learning, where the developed model can be generalized to work on unseen downstream datasets. QCLIP is implemented by using classical neural networks (CaNNs) to generate low-dimensional data feature embeddings, followed by quantum neural networks that adapt and generalize the learned representation in the quantum Hilbert space. Experimental results show that the hybrid QCLIP model can be efficiently trained for representation learning. We evaluate the representation transfer capability of QCLIP against the classical Contrastive Language-Image Pre-Training (CLIP) model on various datasets. Both simulation results and real-device results on the NISQ IBM_Auckland quantum computer show that the proposed QCLIP model outperforms the classical CLIP model in all test cases. As the field of QML on NISQ devices continues to evolve, we anticipate that this work will serve as a valuable foundation for future research and advancements in this promising area.
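
The abstract describes a pipeline in which a classical feature extractor produces a low-dimensional embedding that a small variational quantum circuit then re-embeds in Hilbert space. The sketch below illustrates one way such a hybrid encoder could be wired up; it is not the authors' implementation. It assumes PennyLane with the PyTorch interface, and the qubit count, ansatz choice (StronglyEntanglingLayers), layer sizes, and the HybridImageEncoder class name are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import pennylane as qml

N_QUBITS = 4   # assumed width of the quantum embedding; the paper's qubit count may differ
N_LAYERS = 2   # assumed ansatz depth

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def quantum_head(inputs, weights):
    # Encode classical features as rotation angles, then apply a trainable entangling ansatz.
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    qml.StronglyEntanglingLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

weight_shapes = {"weights": (N_LAYERS, N_QUBITS, 3)}

class HybridImageEncoder(nn.Module):
    """Classical feature extractor followed by a variational quantum projection head."""
    def __init__(self, in_dim=512):
        super().__init__()
        self.classical = nn.Sequential(      # stand-in for the classical (CaNN) encoder
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, N_QUBITS),         # compress to one feature per qubit
        )
        self.quantum = qml.qnn.TorchLayer(quantum_head, weight_shapes)

    def forward(self, x):
        z = torch.tanh(self.classical(x))    # keep rotation angles bounded
        return self.quantum(z)               # N_QUBITS-dimensional quantum-refined embedding

# Example: HybridImageEncoder()(torch.randn(8, 512)) -> tensor of shape (8, 4)
```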
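QCLIP follows CLIP's contrastive pre-training recipe, in which the matched image-caption pairs of a batch are treated as positives and all other pairings as negatives. The snippet below is a generic sketch of that symmetric contrastive objective, not code from the paper; the temperature value and the clip_contrastive_loss name are assumptions.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Cosine-similarity logits between every image and every caption in the batch.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature

    # The i-th image matches the i-th caption, so the targets are the diagonal.
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    loss_i = F.cross_entropy(logits, targets)       # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)   # text -> image direction
    return (loss_i + loss_t) / 2
```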
Source journal
Quantum Science and Technology
Subject category: Materials Science (miscellaneous)
CiteScore: 11.20
Self-citation rate: 3.00%
Articles published: 133
Journal description: Driven by advances in technology and experimental capability, the last decade has seen the emergence of quantum technology: a new praxis for controlling the quantum world. It is now possible to engineer complex, multi-component systems that merge the once distinct fields of quantum optics and condensed matter physics. Quantum Science and Technology is a new multidisciplinary, electronic-only journal, devoted to publishing research of the highest quality and impact covering theoretical and experimental advances in the fundamental science and application of all quantum-enabled technologies.