Laziness, Barren Plateau, and Noises in Machine Learning

Junyu Liu, Zexi Lin, L. Jiang
{"title":"Laziness, Barren Plateau, and Noises in Machine Learning","authors":"Junyu Liu, Zexi Lin, L. Jiang","doi":"10.1088/2632-2153/ad35a3","DOIUrl":null,"url":null,"abstract":"\n We define \\emph{laziness} to describe a large suppression of variational parameter updates for neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We discuss the difference between laziness and \\emph{barren plateau} in quantum machine learning created by quantum physicists in \\cite{mcclean2018barren} for the flatness of the loss function landscape during gradient descent. We address a novel theoretical understanding of those two phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits, without the measurement noise, the loss function landscape is complicated in the overparametrized regime with a large number of trainable variational angles. Instead, around a random starting point in optimization, there are large numbers of local minima that are good enough and could minimize the mean square loss function, where we still have quantum laziness, but we do not have barren plateaus. However, the complicated landscape is not visible within a limited number of iterations, and low precision in quantum control and quantum sensing. Moreover, we look at the effect of noises during optimization by assuming intuitive noise models, and show that variational quantum algorithms are noise-resilient in the overparametrization regime. Our work precisely reformulates the quantum barren plateau statement towards a precision statement and justifies the statement in certain noise models, injects new hope toward near-term variational quantum algorithms, and provides theoretical connections toward classical machine learning.","PeriodicalId":503691,"journal":{"name":"Machine Learning: Science and Technology","volume":"62 11","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Learning: Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2632-2153/ad35a3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We define \emph{laziness} to describe a strong suppression of variational parameter updates in neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We distinguish laziness from the \emph{barren plateau} phenomenon, introduced by quantum physicists in \cite{mcclean2018barren} to describe the flatness of the loss-function landscape encountered during gradient descent in quantum machine learning. We develop a theoretical understanding of both phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits, without measurement noise, the loss-function landscape is complicated in the overparametrized regime, where the number of trainable variational angles is large. Around a random starting point of the optimization, however, there are many local minima that are good enough to minimize the mean-square loss function; in this regime we still have quantum laziness, but we do not have barren plateaus. Moreover, this complicated landscape is not visible within a limited number of iterations and at the limited precision available in quantum control and quantum sensing. We further study the effect of noise during optimization under intuitive noise models and show that variational quantum algorithms are noise-resilient in the overparametrized regime. Our work reformulates the quantum barren plateau statement as a statement about precision, justifies it within certain noise models, injects new hope into near-term variational quantum algorithms, and provides theoretical connections to classical machine learning.
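The abstract's central quantitative claim, that gradient-based parameter updates are suppressed exponentially in the number of qubits for randomized variational circuits, can be probed with a small numerical experiment. The sketch below is illustrative only and not code from the paper: it uses a plain NumPy statevector simulator of a generic hardware-efficient ansatz (layers of RY rotations plus CZ entanglers, an assumed circuit family rather than the one analyzed in the work), samples random initializations, and estimates the variance of a parameter-shift gradient of the local observable ⟨Z_0⟩ as the qubit count grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation, exp(-i * theta * Y / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to `qubit` of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 (sign flip on |11>)."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def cost(thetas, n, layers):
    """<Z_0> after a hardware-efficient ansatz: RY layers + CZ entanglers."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_single(state, ry(thetas[k]), q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape([2] * n)) ** 2
    return probs[0].sum() - probs[1].sum()  # P(qubit0 = 0) - P(qubit0 = 1)

def grad_first_param(thetas, n, layers):
    """Parameter-shift gradient of <Z_0> w.r.t. the first variational angle."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return 0.5 * (cost(plus, n, layers) - cost(minus, n, layers))

# Estimate Var[dC/dtheta_1] over random initializations for growing qubit counts.
layers, samples = 8, 200
for n in (2, 4, 6, 8):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, n * layers), n, layers)
             for _ in range(samples)]
    print(f"n = {n}: Var[grad] ~ {np.var(grads):.3e}")
```

In this toy setup the printed gradient variance should shrink rapidly as the qubit count grows, which is the kind of suppression the abstract refers to; the exact rate depends on the ansatz, its depth, and the chosen observable, so this is a qualitative illustration rather than a reproduction of the paper's results.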