Backdoor attacks against Hybrid Classical-Quantum Neural Networks

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Ji Guo , Wenbo Jiang , Rui Zhang , Wenshu Fan , Jiachen Li , Guoming Lu , Hongwei Li
Citations: 0

Abstract

Hybrid Classical-Quantum Neural Networks (HQNNs) represent a promising advancement in Quantum Machine Learning (QML), yet their security has been rarely explored. In this paper, we present the first systematic study of backdoor attacks on HQNNs. We begin by proposing an attack framework and providing a theoretical analysis of the generalization bounds and minimum perturbation requirements for backdoor attacks on HQNNs. Next, we employ two classic backdoor attack methods on HQNNs and Convolutional Neural Networks (CNNs) to further investigate the robustness of HQNNs. Our experimental results demonstrate that HQNNs are more robust than CNNs, requiring more significant image modifications for successful attacks. Additionally, we introduce the Qcolor backdoor, which utilizes color shifts as triggers and employs the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to optimize hyperparameters. Through extensive experiments, we demonstrate the effectiveness, stealthiness, and robustness of the Qcolor backdoor.
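The two ingredients of the Qcolor backdoor named in the abstract can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's implementation: `apply_color_shift` poisons an RGB image with a small uniform per-channel offset (the "color shift as trigger" idea), and `non_dominated` implements the non-dominated sorting step at the heart of NSGA-II, here used to rank hypothetical trigger hyperparameters by two competing objectives such as (1 − attack success rate) and trigger perceptibility. The specific shift values and candidate scores are made up for demonstration.

```python
import numpy as np

def apply_color_shift(image, shift=(12, -8, 5)):
    """Poison an RGB image (H, W, 3, uint8) with a uniform per-channel offset.

    Illustrative color-shift trigger: the poisoned image differs from the
    original only by a small constant shift in each color channel, which is
    hard to notice visually. Shift values here are arbitrary examples.
    """
    shifted = image.astype(np.int16) + np.asarray(shift, dtype=np.int16)
    return np.clip(shifted, 0, 255).astype(np.uint8)

def non_dominated(points):
    """Return indices of Pareto-optimal rows (all objectives minimized).

    A minimal version of the non-dominated sorting that NSGA-II performs:
    a point is kept unless some other point is at least as good on every
    objective and strictly better on at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Poison one synthetic all-black image.
img = np.zeros((4, 4, 3), dtype=np.uint8)
poisoned = apply_color_shift(img)

# Hypothetical (1 - ASR, perceptibility) pairs for candidate triggers;
# both objectives are minimized, so the Pareto front trades them off.
candidates = [(0.10, 0.9), (0.05, 0.5), (0.30, 0.2), (0.40, 0.8)]
pareto = non_dominated(candidates)  # indices of the Pareto-optimal triggers
```

In the full NSGA-II, this sorting is combined with crowding-distance ranking and genetic operators over many generations; the sketch above only shows the selection criterion that makes the search multi-objective.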
Source journal
Neural Networks (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Annual articles: 425
Review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.