Are two hidden layers still enough for the physics-informed neural networks?

IF 3.8 · CAS Tier 2 (Physics and Astronomy) · JCR Q2 (Computer Science, Interdisciplinary Applications)
Vasiliy A. Es’kin, Alexey O. Malkhanov, Mikhail E. Smorkalov
{"title":"两个隐藏层对于物理信息的神经网络来说还足够吗?","authors":"Vasiliy A. Es’kin ,&nbsp;Alexey O. Malkhanov ,&nbsp;Mikhail E. Smorkalov","doi":"10.1016/j.jcp.2025.114085","DOIUrl":null,"url":null,"abstract":"<div><div>The article discusses the development of various methods and techniques for initializing and training neural networks with a single hidden layer, as well as training a separable physics-informed neural network consisting of neural networks with a single hidden layer to solve physical problems described by ordinary differential equations (ODEs) and partial differential equations (PDEs). A method for strictly deterministic initialization of a neural network with one hidden layer for solving physical problems described by an ODE is proposed. Modifications to existing methods for weighting the loss function (<span><math><mi>δ</mi></math></span>-causal training and gradient normalization) are given, as well as new methods developed for training strictly deterministic-initialized neural networks to solve ODEs (detaching, additional weighting based on the second derivative, predicted solution-based weighting, relative residuals). An algorithm for physics-informed data-driven initialization of a neural network with one hidden layer is proposed. A neural network with pronounced generalizing properties is presented, meaning that for unseen problem parameters it delivers the solution accuracy close to that of parameters seen in the training dataset. The generalizing abilities of such neural network can be precisely controlled by adjusting the neural network parameters. A metric for measuring the generalization of such neural network has been introduced. A gradient-free neuron-by-neuron (NbN) fitting method has been developed for adjusting the parameters of a single-hidden-layer neural network, which does not require the use of an optimizer or solver for its implementation. The proposed methods have been extended to 2D problems using the separable physics-informed neural networks (SPINN) approach. Numerous experiments have been carried out to develop the above methods and approaches. Experiments on physical problems, such as solving various ODEs and PDEs, have demonstrated that these methods for initializing and training neural networks with one or two hidden layers (SPINN) achieve competitive accuracy and, in some cases, state-of-the-art results.</div></div>","PeriodicalId":352,"journal":{"name":"Journal of Computational Physics","volume":"537 ","pages":"Article 114085"},"PeriodicalIF":3.8000,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Are two hidden layers still enough for the physics-informed neural networks?\",\"authors\":\"Vasiliy A. Es’kin ,&nbsp;Alexey O. Malkhanov ,&nbsp;Mikhail E. Smorkalov\",\"doi\":\"10.1016/j.jcp.2025.114085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The article discusses the development of various methods and techniques for initializing and training neural networks with a single hidden layer, as well as training a separable physics-informed neural network consisting of neural networks with a single hidden layer to solve physical problems described by ordinary differential equations (ODEs) and partial differential equations (PDEs). A method for strictly deterministic initialization of a neural network with one hidden layer for solving physical problems described by an ODE is proposed. 
Modifications to existing methods for weighting the loss function (<span><math><mi>δ</mi></math></span>-causal training and gradient normalization) are given, as well as new methods developed for training strictly deterministic-initialized neural networks to solve ODEs (detaching, additional weighting based on the second derivative, predicted solution-based weighting, relative residuals). An algorithm for physics-informed data-driven initialization of a neural network with one hidden layer is proposed. A neural network with pronounced generalizing properties is presented, meaning that for unseen problem parameters it delivers the solution accuracy close to that of parameters seen in the training dataset. The generalizing abilities of such neural network can be precisely controlled by adjusting the neural network parameters. A metric for measuring the generalization of such neural network has been introduced. A gradient-free neuron-by-neuron (NbN) fitting method has been developed for adjusting the parameters of a single-hidden-layer neural network, which does not require the use of an optimizer or solver for its implementation. The proposed methods have been extended to 2D problems using the separable physics-informed neural networks (SPINN) approach. Numerous experiments have been carried out to develop the above methods and approaches. Experiments on physical problems, such as solving various ODEs and PDEs, have demonstrated that these methods for initializing and training neural networks with one or two hidden layers (SPINN) achieve competitive accuracy and, in some cases, state-of-the-art results.</div></div>\",\"PeriodicalId\":352,\"journal\":{\"name\":\"Journal of Computational Physics\",\"volume\":\"537 \",\"pages\":\"Article 114085\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Physics\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0021999125003687\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Physics","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0021999125003687","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

The article discusses the development of various methods and techniques for initializing and training neural networks with a single hidden layer, as well as for training a separable physics-informed neural network composed of single-hidden-layer networks, to solve physical problems described by ordinary differential equations (ODEs) and partial differential equations (PDEs). A method for strictly deterministic initialization of a neural network with one hidden layer for solving physical problems described by an ODE is proposed. Modifications to existing methods for weighting the loss function (δ-causal training and gradient normalization) are given, as well as new methods developed for training strictly deterministically initialized neural networks to solve ODEs (detaching, additional weighting based on the second derivative, predicted-solution-based weighting, relative residuals). An algorithm for physics-informed, data-driven initialization of a neural network with one hidden layer is proposed. A neural network with pronounced generalizing properties is presented, meaning that for unseen problem parameters it delivers solution accuracy close to that obtained for parameters seen in the training dataset. The generalizing ability of such a neural network can be precisely controlled by adjusting the network parameters, and a metric for measuring its generalization is introduced. A gradient-free neuron-by-neuron (NbN) fitting method has been developed for adjusting the parameters of a single-hidden-layer neural network; it does not require an optimizer or solver for its implementation. The proposed methods have been extended to 2D problems using the separable physics-informed neural network (SPINN) approach. Numerous experiments have been carried out to develop the above methods and approaches. Experiments on physical problems, such as solving various ODEs and PDEs, have demonstrated that these methods for initializing and training neural networks with one or two hidden layers (SPINN) achieve competitive accuracy and, in some cases, state-of-the-art results.
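The abstract does not give implementation details, but a minimal sketch of the basic building block it refers to, a single-hidden-layer physics-informed network trained on an ODE residual, may help fix ideas. The example below (PyTorch is assumed here purely for illustration; the paper does not specify a framework) solves u'(x) = -u(x), u(0) = 1 with standard random initialization and an unweighted MSE loss. The paper's strictly deterministic initialization, its loss-weighting schemes (δ-causal training, gradient normalization, detaching, etc.) and the gradient-free neuron-by-neuron fitting are not reproduced here.

```python
# A minimal sketch (not the authors' code) of a single-hidden-layer
# physics-informed network for the ODE u'(x) = -u(x), u(0) = 1,
# whose exact solution is u(x) = exp(-x).
import torch

torch.manual_seed(0)

# Single hidden layer: x -> tanh(W1 x + b1) -> W2 h + b2
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

x = torch.linspace(0.0, 2.0, 128).reshape(-1, 1).requires_grad_(True)
x0 = torch.zeros(1, 1)  # initial-condition point

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5000):
    u = model(x)
    # du/dx via automatic differentiation
    du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                             create_graph=True)[0]
    residual = du + u                       # enforce u' + u = 0
    loss_pde = (residual ** 2).mean()       # unweighted residual loss
    loss_ic = (model(x0) - 1.0).pow(2).mean()
    loss = loss_pde + loss_ic
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

For the 2D extension, the SPINN approach cited in the abstract represents the solution separably, roughly as u(x, y) ≈ Σ_r f_r(x) g_r(y), with one small network per coordinate axis; this separable structure is what lets networks with a single hidden layer per axis remain workable in higher dimensions.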
Source journal: Journal of Computational Physics (Physics; Computer Science, Interdisciplinary Applications)
CiteScore: 7.60
Self-citation rate: 14.60%
Articles per year: 763
Review time: 5.8 months
Aims and scope: Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries. The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.