Learning Generalized Hamiltonians using fully Symplectic Mappings

Harsh Choudhary, Chandan Gupta, Vyacheslav Kungurtsev, Melvin Leok, Georgios Korpas
arXiv:2409.11138 · arXiv - CS - Machine Learning · Published 2024-09-17
Citations: 0

Abstract

Many important physical systems can be described as the evolution of a Hamiltonian system, which has the important property of being conservative: energy is conserved throughout the evolution. Physics-Informed Neural Networks, and in particular Hamiltonian Neural Networks, have emerged as a mechanism to incorporate structural inductive bias into the NN model. By ensuring physical invariances are conserved, these models exhibit significantly better sample complexity and out-of-distribution accuracy than standard NNs. Learning the Hamiltonian as a function of its canonical variables, typically position and momentum, from sample observations of the system thus becomes a critical task in system identification and long-term prediction of system behavior. However, to truly preserve the long-run physical conservation properties of Hamiltonian systems, one must use symplectic integrators for the forward pass of the system's simulation. While symplectic schemes have been used in the literature, they have so far been limited to situations in which they reduce to explicit algorithms, which include the case of separable Hamiltonians or augmented non-separable Hamiltonians. We extend this approach to generalized non-separable Hamiltonians and, by exploiting the self-adjoint property of symplectic integrators, bypass computationally intensive backpropagation through an ODE solver. We show that the method is robust to noise and provides a good approximation of the system Hamiltonian when the state variables are sampled from noisy observations. In the numerical results, we show the performance of the method with respect to Hamiltonian reconstruction and conservation, indicating its particular advantage for non-separable systems.
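The abstract's central point is that non-separable Hamiltonians require an implicit symplectic scheme: the update equations couple position and momentum and cannot be split into explicit half-steps. The sketch below illustrates this with the implicit midpoint rule, a well-known symplectic and self-adjoint integrator, solved by fixed-point iteration on a standard non-separable test Hamiltonian H(q, p) = (q² + 1)(p² + 1)/2. This is an illustration of the class of integrators the abstract refers to, not the paper's specific method; the Hamiltonian, step size, and solver tolerance are choices made here for demonstration.

```python
def H(q, p):
    # Non-separable test Hamiltonian: H(q, p) = (q^2 + 1)(p^2 + 1) / 2.
    # "Non-separable" means H cannot be written as T(p) + V(q).
    return 0.5 * (q**2 + 1) * (p**2 + 1)

def grad_H(q, p):
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    dHdq = q * (p**2 + 1)
    dHdp = p * (q**2 + 1)
    return dHdq, dHdp

def implicit_midpoint_step(q, p, h, tol=1e-12, max_iter=100):
    # Implicit midpoint rule: evaluate the vector field at the midpoint
    # of the old and new states. The update is implicit, so we resolve it
    # by fixed-point iteration (a contraction for small enough h).
    q_new, p_new = q, p
    for _ in range(max_iter):
        qm, pm = 0.5 * (q + q_new), 0.5 * (p + p_new)
        dHdq, dHdp = grad_H(qm, pm)
        q_next = q + h * dHdp
        p_next = p - h * dHdq
        converged = abs(q_next - q_new) + abs(p_next - p_new) < tol
        q_new, p_new = q_next, p_next
        if converged:
            break
    return q_new, p_new

# Integrate over a long horizon and check energy drift.
q, p = 0.5, 0.0
E0 = H(q, p)
for _ in range(10000):
    q, p = implicit_midpoint_step(q, p, h=0.01)
drift = abs(H(q, p) - E0)
print(drift)  # remains small: symplectic schemes bound the energy error
```

Because the implicit midpoint rule is self-adjoint (its inverse step with negated step size is the same method), integrating backward retraces the forward trajectory; this is the property the authors note can be used to avoid backpropagating through the ODE solver.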