Backpropagation scaling in parameterised quantum circuits

IF 5.1 · Zone 2 (Physics & Astronomy) · Q1 PHYSICS, MULTIDISCIPLINARY
Quantum · Publication date: 2025-10-02 · DOI: 10.22331/q-2025-10-02-1873
Joseph Bowles, David Wierichs, Chae-Yeun Park
{"title":"Backpropagation scaling in parameterised quantum circuits","authors":"Joseph Bowles, David Wierichs, Chae-Yeun Park","doi":"10.22331/q-2025-10-02-1873","DOIUrl":null,"url":null,"abstract":"The discovery of the backpropagation algorithm ranks among one of the most important moments in the history of machine learning, and has made possible the training of large-scale neural networks through its ability to compute gradients at roughly the same computational cost as model evaluation. Despite its importance, a similar backpropagation-like scaling for gradient evaluation of parameterised quantum circuits has remained elusive. Currently, the most popular method requires sampling from a number of circuits that scales with the number of circuit parameters, making training of large-scale quantum circuits prohibitively expensive in practice. Here we address this problem by introducing a class of structured circuits that are not known to be classically simulable and admit gradient estimation with significantly fewer circuits. In the simplest case – for which the parameters feed into commuting quantum gates – these circuits allow for fast estimation of the gradient, higher order partial derivatives and the Fisher information matrix. Moreover, specific families of parameterised circuits exist for which the scaling of gradient estimation is in line with classical backpropagation, and can thus be trained at scale. In a toy classification problem on 16 qubits, such circuits show competitive performance with other methods, while reducing the training cost by about two orders of magnitude.","PeriodicalId":20807,"journal":{"name":"Quantum","volume":"8 1","pages":""},"PeriodicalIF":5.1000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantum","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.22331/q-2025-10-02-1873","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

The discovery of the backpropagation algorithm ranks among the most important moments in the history of machine learning, and has made possible the training of large-scale neural networks through its ability to compute gradients at roughly the same computational cost as model evaluation. Despite its importance, a similar backpropagation-like scaling for gradient evaluation of parameterised quantum circuits has remained elusive. Currently, the most popular method requires sampling from a number of circuits that scales with the number of circuit parameters, making training of large-scale quantum circuits prohibitively expensive in practice. Here we address this problem by introducing a class of structured circuits that are not known to be classically simulable and admit gradient estimation with significantly fewer circuits. In the simplest case – for which the parameters feed into commuting quantum gates – these circuits allow for fast estimation of the gradient, higher order partial derivatives and the Fisher information matrix. Moreover, specific families of parameterised circuits exist for which the scaling of gradient estimation is in line with classical backpropagation, and can thus be trained at scale. In a toy classification problem on 16 qubits, such circuits show competitive performance with other methods, while reducing the training cost by about two orders of magnitude.
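
As context for the cost comparison in the abstract, the sketch below illustrates the standard two-term parameter-shift rule, i.e. the conventional "most popular method" whose gradient cost grows linearly with the number of circuit parameters (two circuit evaluations per parameter). It is not the structured-circuit approach introduced by the authors; the use of PennyLane, the default.qubit simulator, and the generic Pauli-rotation circuit are illustrative assumptions, not details taken from the paper.

# Minimal sketch of the conventional parameter-shift gradient (assumption: PennyLane installed).
# Cost scaling: one gradient requires 2 * n_params circuit evaluations.
import numpy as np
import pennylane as qml

n_qubits = 4
n_params = 8
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(params):
    # Generic example circuit: single-qubit Pauli rotations, which admit the
    # two-term parameter-shift rule.
    for i, theta in enumerate(params):
        qml.RX(theta, wires=i % n_qubits)
    return qml.expval(qml.PauliZ(0))

def parameter_shift_gradient(params, shift=np.pi / 2):
    """Gradient via the two-term parameter-shift rule: 2 * len(params) circuit runs."""
    grad = np.zeros_like(params)
    for k in range(len(params)):
        shifted = params.copy()
        shifted[k] += shift
        plus = circuit(shifted)       # evaluation at theta_k + pi/2
        shifted[k] -= 2 * shift
        minus = circuit(shifted)      # evaluation at theta_k - pi/2
        grad[k] = 0.5 * (plus - minus)
    return grad

params = np.random.uniform(0, 2 * np.pi, n_params)
print(parameter_shift_gradient(params))  # here: 2 * 8 = 16 circuit evaluations for one gradient

By contrast, backpropagation-like scaling means the gradient costs roughly the same as a constant number of model evaluations, independent of the parameter count; this is the gap the paper's structured circuits aim to close.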
Source journal: Quantum
Subject area: Quantum Physics and Astronomy-Physics and Astronomy (miscellaneous)
CiteScore: 9.20
Self-citation rate: 10.90%
Articles published: 241
Review time: 16 weeks
About the journal: Quantum is an open-access peer-reviewed journal for quantum science and related fields. Quantum is non-profit and community-run: an effort by researchers and for researchers to make science more open and publishing more transparent and efficient.