Characterizing privacy in quantum machine learning

IF 6.6 · Region 1 (Physics & Astronomy) · Q1 PHYSICS, APPLIED
Jamie Heredge, Niraj Kumar, Dylan Herman, Shouvanik Chakrabarti, Romina Yalovetzky, Shree Hari Sureshbabu, Changhao Li, Marco Pistoia
DOI: 10.1038/s41534-025-01022-z
Journal: npj Quantum Information
Published: 2025-05-19 (Journal Article)
Citations: 0

Abstract


Ensuring data privacy in machine learning models is critical, especially in distributed settings where model gradients are shared among multiple parties for collaborative learning. Motivated by the increasing success of recovering input data from the gradients of classical models, this study investigates the analogous challenge for variational quantum circuits (VQC) as quantum machine learning models. We highlight the crucial role of the dynamical Lie algebra (DLA) in determining privacy vulnerabilities. While the DLA has been linked to the trainability and simulatability of VQC models, we establish its connection to privacy for the first time. We show that properties conducive to VQC trainability, such as a polynomial-sized DLA, also facilitate extracting detailed snapshots of the input, posing a weak privacy breach. We further investigate conditions for a strong privacy breach, where original input data can be recovered from snapshots by classical or quantum-assisted methods. We establish properties of the encoding map, such as classical simulatability, overlap with DLA basis, and its Fourier frequency characteristics that enable such a privacy breach of VQC models. Our framework thus guides the design of quantum machine learning models, balancing trainability and robust privacy protection.
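The attack motivating this study can be made concrete with a minimal classical sketch (illustrative only, not taken from the paper): for a linear softmax classifier trained with cross-entropy loss, the per-sample gradients shared in collaborative learning factor as dL/dW = (p − y) xᵀ and dL/db = (p − y), so an observer of the gradients can recover the private input x exactly. All names below (`x_recovered`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Private input and one-hot label held by one party
x = rng.normal(size=4)
y = np.zeros(3)
y[1] = 1.0

# Model parameters, known to all parties in the collaborative setting
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

# The party computes and shares gradients of the cross-entropy loss
p = softmax(W @ x + b)
grad_W = np.outer(p - y, x)   # dL/dW = (p - y) x^T
grad_b = p - y                # dL/db = (p - y)

# An attacker reconstructs x from the shared gradients alone:
# row i of grad_W equals grad_b[i] * x, so dividing any row with a
# nonzero grad_b[i] recovers the input exactly.
i = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x))  # the input leaks from the gradients
```

The paper asks when an analogous reconstruction is possible for variational quantum circuits, where the encoding map and the circuit's dynamical Lie algebra determine how much of the input survives in the shared gradients.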

Source journal
npj Quantum Information (Computer Science, miscellaneous)
CiteScore: 13.70
Self-citation rate: 3.90%
Articles per year: 130
Review time: 29 weeks
Journal scope: npj Quantum Information spans all relevant disciplines, fields, approaches, and levels, considering outstanding work ranging from fundamental research to applications and technologies.