{"title":"Quantum neural networks form Gaussian processes","authors":"Diego García-Martín, Martín Larocca, M. Cerezo","doi":"10.1038/s41567-025-02883-z","DOIUrl":null,"url":null,"abstract":"<p>Classical artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. This correspondence plays an important role in the current understanding of the capabilities of neural networks. Here we prove an analogous result for quantum neural networks. We show that the outputs of certain models based on Haar-random unitary or orthogonal quantum neural networks converge to Gaussian processes in the limit of large Hilbert space dimension <i>d</i>. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable and because the entries of unitary matrices are not independent. We show that the efficiency of predicting measurements at the output of a quantum neural network using Gaussian process regression depends on the number of measured qubits. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar-random quantum neural networks is worse than previously thought, because expectation values and gradients concentrate as <span>\\({\\mathcal{O}}\\left({1}/{\\operatorname{e}^{d}\\sqrt{d}}\\right)\\)</span>.</p>","PeriodicalId":19100,"journal":{"name":"Nature Physics","volume":"32 1","pages":""},"PeriodicalIF":17.6000,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Physics","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1038/s41567-025-02883-z","RegionNum":1,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Classical artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. This correspondence plays an important role in the current understanding of the capabilities of neural networks. Here we prove an analogous result for quantum neural networks. We show that the outputs of certain models based on Haar-random unitary or orthogonal quantum neural networks converge to Gaussian processes in the limit of large Hilbert space dimension d. The derivation of this result is more nuanced than in the classical case, owing to the role played by the input states and the measurement observable, and because the entries of unitary matrices are not independent. We show that the efficiency of predicting measurements at the output of a quantum neural network using Gaussian process regression depends on the number of measured qubits. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar-random quantum neural networks is worse than previously thought, because expectation values and gradients concentrate as \(\mathcal{O}\left(1/(\mathrm{e}^{d}\sqrt{d})\right)\).
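As a rough numerical illustration of the behaviour described in the abstract, the sketch below samples Haar-random unitaries, treats the expectation value of a Pauli-Z observable on the first qubit (with input state |0...0>) as the quantum neural network output, and reports how the spread of these outputs shrinks as the Hilbert-space dimension d = 2^n grows. The specific setup (observable, input state, sample counts) and the helper names haar_unitary and qnn_output are illustrative assumptions, not taken from the paper; the code uses only NumPy.

```python
# Minimal sketch (not from the paper): Haar-random "QNN" outputs
# <0...0| U^dagger (Z x I x ... x I) U |0...0>, their empirical spread,
# and how that spread shrinks with the Hilbert-space dimension d = 2^n.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d: int) -> np.ndarray:
    """Draw a d x d Haar-random unitary via the QR decomposition trick."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the column phases so the distribution is exactly Haar.
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

def qnn_output(n_qubits: int) -> float:
    """Expectation of Z on the first qubit after a Haar-random circuit acting on |0...0>."""
    d = 2 ** n_qubits
    U = haar_unitary(d)
    psi = U[:, 0]                                        # U|0...0> is the first column of U
    z1 = np.where(np.arange(d) < d // 2, 1.0, -1.0)      # Z x I x ... x I is diagonal
    return float(np.real(np.vdot(psi, z1 * psi)))

for n in (2, 4, 6):
    samples = np.array([qnn_output(n) for _ in range(2000)])
    print(f"n={n}: mean={samples.mean():+.4f}, std={samples.std():.4f}")
    # The empirical standard deviation shrinks as d = 2^n grows, illustrating the
    # concentration of measure discussed in the abstract; a histogram of the
    # (suitably rescaled) samples looks increasingly Gaussian, consistent with
    # the Gaussian-process limit for the outputs.
```

Running the loop for larger n (at the cost of O(d^3) per QR decomposition) makes the shrinking spread and the Gaussian shape of the rescaled histogram more pronounced; a full check of the paper's stronger concentration rate would require analysing tail probabilities rather than the standard deviation alone.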
About the journal
Nature Physics is dedicated to publishing top-tier original research in physics with a fair and rigorous review process. It offers high visibility and access to a broad readership, high standards of copy editing and production, rapid publication, and independence from academic societies and other vested interests.
The journal presents two main research paper formats: Letters and Articles. Alongside primary research, Nature Physics serves as a central source for valuable information within the physics community through Review Articles, News & Views, Research Highlights covering crucial developments across the physics literature, Commentaries, Book Reviews, and Correspondence.