Experimental robustness benchmarking of quantum neural networks on a superconducting quantum processor

IF 7.5 · Tier 1 (Physics & Astronomy) · Q1 PHYSICS, MULTIDISCIPLINARY
Hai-Feng Zhang, Zhao-Yun Chen, Peng Wang, Liang-Liang Guo, Tian-Le Wang, Xiao-Yan Yang, Ren-Ze Zhao, Ze-An Zhao, Sheng Zhang, Lei Du, Hao-Ran Tao, Zhi-Long Jia, Wei-Cheng Kong, Huan-Yu Liu, Athanasios V. Vasilakos, Yang Yang, Yu-Chun Wu, Ji Guan, Peng Duan, Guo-Ping Guo
DOI: 10.1007/s11433-025-2943-6 · Science China Physics, Mechanics & Astronomy, 69(6) · Published 2026-04-27 · Journal Article
Citations: 0

Abstract

Quantum machine learning (QML) models, like their classical counterparts, are intrinsically vulnerable to adversarial attacks, hindering their secure deployment. Here, we report the first systematic experimental benchmark of robustness for 20-qubit quantum neural network (QNN) classifiers executed on a superconducting processor. Our benchmarking protocol features an efficient adversarial attack algorithm tailored for quantum hardware, enabling the diagnosis of QNN robustness across diverse datasets. The empirical upper bound extracted from our attack experiments deviates by only 3 × 10⁻³ from the analytical lower bound, providing strong experimental confirmation of our attack’s precision and the tightness of the fidelity-based robustness bounds. Furthermore, our quantitative analysis reveals that adversarial training mitigates sensitivity to targeted perturbations by regularizing input gradients, thereby significantly enhancing QNN robustness. Additionally, we observe that experimentally measured QNNs exhibit higher adversarial robustness than classical neural networks, an effect attributed to inherent quantum noise. Our work establishes the first scalable and experimentally accessible framework for robustness benchmarking, paving the way for secure and reliable QML applications.
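The attack paradigm the abstract refers to — nudging an input along the loss gradient until the classifier's decision flips — can be illustrated with a minimal classical sketch. This is a textbook FGSM-style step on a toy logistic classifier, not the paper's hardware-tailored quantum attack; all names and values here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: move x by eps along the sign of the loss gradient.
    For logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy linear classifier: predicts class 1 when w.x + b > 0.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])   # clean input, correctly classified (margin 0.2)
y = 1.0                    # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.15)
print((w @ x + b) > 0)       # True  (clean input: class 1)
print((w @ x_adv + b) > 0)   # False (perturbed input flips to class 0)
```

A perturbation budget of eps = 0.15 in the infinity norm suffices to cross the decision boundary here; the paper's benchmark plays the analogous game with quantum states, measuring how small a perturbation flips a QNN's measurement-based decision.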

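The fidelity-based bounds mentioned above rest on fidelity as the distance measure between clean and perturbed quantum states. A minimal sketch of that check for pure states follows; the threshold f_min is an assumed placeholder, not the paper's analytical bound, and the function names are hypothetical:

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity between two pure states: |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi)) ** 2

def is_within_bound(psi, phi, f_min):
    """Treat the perturbed state phi as within the robustness bound of
    the clean state psi when their fidelity stays above f_min."""
    return fidelity(psi, phi) >= f_min

# Clean single-qubit state |0> and a slightly rotated version of it.
psi = np.array([1.0, 0.0], dtype=complex)
theta = 0.05  # small rotation angle (radians)
phi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

print(round(fidelity(psi, phi), 6))           # 0.997502
print(is_within_bound(psi, phi, f_min=0.99))  # True
```

For this one-parameter family the fidelity is exactly cos²(θ), so small rotations stay close to 1; a robustness certificate of the kind the paper benchmarks asserts that no state above the fidelity threshold can change the classifier's decision.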
Source Journal
Science China Physics, Mechanics & Astronomy
CiteScore: 10.30
Self-citation rate: 6.20%
Articles published: 4047
Review time: 3 months
About the journal: Science China Physics, Mechanics & Astronomy, an academic journal cosponsored by the Chinese Academy of Sciences and the National Natural Science Foundation of China, and published by Science China Press, is committed to publishing high-quality, original results in both basic and applied research. It is published in both print and electronic forms and is indexed by Science Citation Index. Categories of articles: Reviews summarize representative results and achievements in a particular topic or area, comment on the current state of research, and advise on research directions; the authors' own opinions and related discussion are requested. Research papers report important original results in all areas of physics, mechanics, and astronomy. Brief reports present timely short accounts of the latest important results.