Efficient optimization of variational autoregressive networks with natural gradient.

IF 2.2 · CAS Tier 3 (Physics & Astronomy) · Q2 PHYSICS, FLUIDS & PLASMAS
Jing Liu, Ying Tang, Pan Zhang
Physical Review E 111(2), 025304 (2025). Published 2025-02-01. DOI: 10.1103/PhysRevE.111.025304
Citations: 0

Abstract

Estimating free energy is a fundamental problem in statistical mechanics. Recently, machine-learning-based methods, particularly variational autoregressive networks (VANs), have been proposed to minimize the variational free energy and to approximate the Boltzmann distribution. VAN enjoys notable advantages, including exact computation of the normalized joint distribution and fast sampling, critical features often missing in Markov chain Monte Carlo algorithms. However, VAN also faces significant computational challenges, including the difficulty of optimizing the variational free energy in a complicated parameter space and slow convergence of learning. In this work, we introduce an optimization technique based on natural gradients to the VAN framework, namely ng-VAN, to enhance the learning efficiency and accuracy of the conventional VAN. The method has computational complexity cubic in the batch size rather than in the number of model parameters, hence it can be efficiently implemented for a large VAN model. We carried out extensive numerical experiments on the Sherrington-Kirkpatrick model, spin glasses on random graphs, and the two-dimensional Ising model. Our results indicate that compared with the conventional VAN, ng-VAN significantly improves the accuracy in estimating free energy and converges much faster with shorter learning time. This extends the VAN framework's applicability to challenging statistical mechanics problems that were previously not accessible.
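The complexity claim — cubic in the batch size rather than in the number of parameters — is characteristic of applying the Woodbury identity to an empirical Fisher matrix built from per-sample score vectors. A minimal sketch of that trick follows; the function name, the damping term, and the use of a plain empirical Fisher are illustrative assumptions, and the paper's actual preconditioner may differ in detail:

```python
import numpy as np

def natural_gradient(G, grad, damping=1e-3):
    """Apply the damped inverse empirical Fisher to a gradient.

    Solves (lam*I + G.T @ G / B) x = grad via the Woodbury identity:
        x = (grad - G.T @ (lam*B*I + G @ G.T)^{-1} @ G @ grad) / lam

    G    : (B, P) matrix of per-sample scores d log q_theta(s_i) / d theta
    grad : (P,)   plain gradient of the variational free energy
    Only the B x B kernel G @ G.T is factorized, so the cost is
    O(B^2 P + B^3): cubic in the batch size B, never in P.
    """
    B = G.shape[0]
    lam = damping
    # B x B system -- the only linear solve in the update.
    K = G @ G.T + lam * B * np.eye(B)
    coef = np.linalg.solve(K, G @ grad)   # (B,)
    return (grad - G.T @ coef) / lam      # (P,) natural-gradient direction
```

For small problems the result can be checked against the direct P x P inversion of `lam*I + G.T @ G / B`; the two agree, while the Woodbury form stays tractable when P (the number of network parameters) is large.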

Source journal: Physical Review E
Subject categories: PHYSICS, FLUIDS & PLASMAS; PHYSICS, MATHEMATICAL
CiteScore: 4.50
Self-citation rate: 16.70%
Articles published: 2110
Journal description: Physical Review E (PRE), broad and interdisciplinary in scope, focuses on collective phenomena of many-body systems, with statistical physics and nonlinear dynamics as the central themes of the journal. Physical Review E publishes recent developments in biological and soft matter physics including granular materials, colloids, complex fluids, liquid crystals, and polymers. The journal covers fluid dynamics and plasma physics and includes sections on computational and interdisciplinary physics, for example, complex networks.