V. A. Es’kin, D. V. Davydov, E. D. Egorova, A. O. Malkhanov, M. A. Akhukov, M. E. Smorkalov
Title: About Modifications of the Loss Function for the Causal Training of Physics-Informed Neural Networks
DOI: 10.1134/S106456242460194X
Journal: Doklady Mathematics, Vol. 110 (Suppl. 1), pp. S172–S192 (Q3, Mathematics)
Published: 2025-03-22 (journal article)
Article: https://link.springer.com/article/10.1134/S106456242460194X
Open-access PDF: https://link.springer.com/content/pdf/10.1134/S106456242460194X.pdf
Citations: 0
Abstract
A method is presented that reduces a problem described by differential equations with initial and boundary conditions to a problem described only by differential equations that encapsulate those initial and boundary conditions. This makes it possible to represent the loss function in the physics-informed neural network (PINN) methodology as a single term associated with the modified differential equations, eliminating the need to tune scaling coefficients for the loss terms related to the boundary and initial conditions. The weighted loss functions respecting causality were modified, and new weighted loss functions based on generalized functions are derived. Numerical experiments were carried out for a number of problems, demonstrating the accuracy of the proposed approaches. A neural network architecture better suited to the Korteweg–de Vries equation is proposed; it demonstrates superior extrapolation of the solution into the space-time domain where no training was performed.
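The abstract refers to weighted loss functions that respect causality. As background, a minimal sketch of the standard causal weighting scheme for PINNs that such work builds on (not the authors' modified, generalized-function-based version): the training time interval is split into slices, and the residual loss of slice *i* is down-weighted until the losses of all earlier slices are small, via w_i = exp(−ε Σ_{k<i} L_k). The function names and the choice of `eps` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def causal_weights(slice_losses, eps=1.0):
    """Causal weights for time-sliced PINN residual losses.

    w_i = exp(-eps * sum_{k<i} L_k): a time slice only receives
    significant weight once the residuals at all earlier times
    are already small, enforcing a causal training order.
    """
    slice_losses = np.asarray(slice_losses, dtype=float)
    # Cumulative loss of all *earlier* slices (0 for the first slice).
    prior = np.concatenate(([0.0], np.cumsum(slice_losses)[:-1]))
    return np.exp(-eps * prior)

def causal_loss(slice_losses, eps=1.0):
    """Single scalar training loss: causally weighted mean of slice losses."""
    slice_losses = np.asarray(slice_losses, dtype=float)
    return float(np.mean(causal_weights(slice_losses, eps) * slice_losses))

# While early slices are unresolved, later slices get near-zero weight.
w = causal_weights([2.0, 0.5, 0.1], eps=1.0)
```

Here the first slice always has weight 1, the second is suppressed by exp(−2.0), and the third by exp(−2.5); as training drives the early residuals toward zero, the later weights recover toward 1 and the full domain is fitted in causal order.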
About the Journal
Doklady Mathematics is a journal of the Presidium of the Russian Academy of Sciences. It contains English translations of papers published in Doklady Akademii Nauk (Proceedings of the Russian Academy of Sciences), which was founded in 1933 and is published 36 times a year. Doklady Mathematics covers the following areas: mathematics, mathematical physics, computer science, control theory, and computers. It publishes brief scientific reports on previously unpublished, significant new research in mathematics and its applications. The main contributors to the journal are Members of the RAS, Corresponding Members of the RAS, and scientists from the former Soviet Union and other countries, among them outstanding Russian mathematicians.