{"title":"Periodic Hamiltonian Neural Networks","authors":"Zi-Yu Khoo;Dawen Wu;Jonathan Sze Choong Low;Stéphane Bressan","doi":"10.1109/TAI.2024.3515934","DOIUrl":null,"url":null,"abstract":"Modeling dynamical systems is a core challenge for science and engineering. Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system under the learning bias of Hamilton's equations. A recent observation is that embedding biases regarding invariances of the Hamiltonian improve regression performance. One such invariance is the periodicity of the Hamiltonian, which improves extrapolation performance. We propose <italic>periodic HNNs</i> that embed periodicity within HNNs using observational, learning, and inductive biases. An observational bias is embedded by training the HNN on data that reflects the periodicity of the Hamiltonian. A learning bias is embedded through the loss function of the HNN. An inductive bias is embedded by a periodic activation function in the HNN. We evaluate the performance of the proposed models on interpolation and extrapolation problems that either assume knowledge of the periods a priori or learn the periods as parameters. We show that the proposed models can interpolate well but are far more effective than the HNN at extrapolating the Hamiltonian and the vector field for both problems and can even extrapolate the vector field of the chaotic double pendulum Hamiltonian system.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 5","pages":"1194-1202"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10817622/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Modeling dynamical systems is a core challenge for science and engineering. Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system under the learning bias of Hamilton's equations. A recent observation is that embedding biases regarding invariances of the Hamiltonian improves regression performance. One such invariance is the periodicity of the Hamiltonian, whose embedding improves extrapolation performance. We propose periodic HNNs that embed periodicity within HNNs using observational, learning, and inductive biases. An observational bias is embedded by training the HNN on data that reflects the periodicity of the Hamiltonian. A learning bias is embedded through the loss function of the HNN. An inductive bias is embedded by a periodic activation function in the HNN. We evaluate the performance of the proposed models on interpolation and extrapolation problems that either assume knowledge of the periods a priori or learn the periods as parameters. We show that the proposed models interpolate well and are far more effective than the HNN at extrapolating the Hamiltonian and the vector field for both problems; they can even extrapolate the vector field of the chaotic double pendulum Hamiltonian system.
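
For illustration, the following minimal sketch (not the authors' released code) shows one way an inductive bias toward periodicity can be embedded in an HNN: a small network predicts a scalar Hamiltonian H(q, p) through sine activations, and the vector field is recovered from Hamilton's equations by automatic differentiation. The layer sizes, the use of torch.sin, and the training snippet are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' implementation): an HNN whose
# scalar output H(q, p) passes through periodic sine activations (inductive
# bias) and whose vector field follows Hamilton's equations via autograd.
import torch
import torch.nn as nn

class PeriodicHNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.dim = dim
        self.l1 = nn.Linear(2 * dim, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)

    def hamiltonian(self, x):
        # x = (q, p); sine activations make the learned H periodic in its inputs.
        return self.l3(torch.sin(self.l2(torch.sin(self.l1(x)))))

    def vector_field(self, x):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        x = x.requires_grad_(True)
        dH = torch.autograd.grad(self.hamiltonian(x).sum(), x, create_graph=True)[0]
        dHdq, dHdp = dH[..., :self.dim], dH[..., self.dim:]
        return torch.cat([dHdp, -dHdq], dim=-1)

# One regression step: fit the predicted vector field to observed derivatives.
model = PeriodicHNN(dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(128, 2) * 6.2832     # sampled states (q, p); range is illustrative
dx_obs = torch.randn(128, 2)        # placeholder for measured time derivatives
opt.zero_grad()
loss = ((model.vector_field(x) - dx_obs) ** 2).mean()
loss.backward()
opt.step()
```

The observational and learning biases described in the abstract would instead act on the training data (e.g., augmenting states by known periods) and on the loss function, respectively, leaving the network architecture unchanged.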