Least squares formulations for some elliptic second order problems, feedforward neural network solutions and convergence results

Jerome Pousin
{"title":"Least squares formulations for some elliptic second order problems, feedforward neural network solutions and convergence results","authors":"Jerome Pousin","doi":"10.1016/j.jcmds.2022.100023","DOIUrl":null,"url":null,"abstract":"<div><p>Recently some neural networks have been proposed for computing approximate solutions to partial differential equations. For second order elliptic or parabolic PDEs, this is possible by using penalized Least squares formulations of PDEs. In this article, for some second order elliptic PDEs we propose a theoretical setting, and we investigate the abstract convergence results between the solution and the computed one with neural networks. These results are obtained by minimizing appropriate loss functions made of a least squares formulation of the PDE augmented with a penalization term for accounting the Dirichlet boundary conditions. More precisely, it is shown that the error has two components, one due to the neural network and one due to the way the boundary conditions are imposed (via a penalization technic). 
The interplay between the two errors shows that the accuracy of the neural network has to be chosen accordingly with the accuracy of the boundary conditions.</p></div>","PeriodicalId":100768,"journal":{"name":"Journal of Computational Mathematics and Data Science","volume":"2 ","pages":"Article 100023"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772415822000013/pdfft?md5=1f9531f8460690468524068d54e3add7&pid=1-s2.0-S2772415822000013-main.pdf","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Mathematics and Data Science","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772415822000013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Recently, neural networks have been proposed for computing approximate solutions to partial differential equations. For second-order elliptic or parabolic PDEs, this is possible by using penalized least-squares formulations of the PDEs. In this article, for some second-order elliptic PDEs, we propose a theoretical setting and establish abstract convergence results between the exact solution and the one computed with neural networks. These results are obtained by minimizing appropriate loss functions consisting of a least-squares formulation of the PDE augmented with a penalization term accounting for the Dirichlet boundary conditions. More precisely, it is shown that the error has two components, one due to the neural network and one due to the way the boundary conditions are imposed (via a penalization technique). The interplay between the two errors shows that the accuracy of the neural network has to be chosen consistently with the accuracy with which the boundary conditions are enforced.
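The loss described in the abstract can be illustrated with a minimal sketch for the one-dimensional Poisson problem -u'' = f on (0,1) with homogeneous Dirichlet conditions. Everything below (network width, collocation points, penalization parameter lam, the finite-difference Laplacian, the numerical-gradient descent) is an illustrative assumption, not the paper's construction; in practice one would use automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer tanh network u_theta(x); width H and all
# hyperparameters below are illustrative choices, not from the paper.
H = 8
theta = 0.5 * rng.standard_normal(3 * H + 1)

def u(theta, x):
    w1, b1, w2, b2 = theta[:H], theta[H:2 * H], theta[2 * H:3 * H], theta[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

# Model problem: -u'' = pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0.
f = lambda x: np.pi**2 * np.sin(np.pi * x)

x_in = np.linspace(0.05, 0.95, 19)   # interior collocation points
x_bd = np.array([0.0, 1.0])          # Dirichlet boundary points
lam = 100.0                          # penalization parameter
h = 1e-3                             # step for the finite-difference u''

def loss(theta):
    # Least-squares PDE residual; u'' approximated by central differences.
    d2u = (u(theta, x_in + h) - 2.0 * u(theta, x_in) + u(theta, x_in - h)) / h**2
    pde = np.mean((-d2u - f(x_in)) ** 2)
    # Penalization term enforcing the homogeneous Dirichlet condition.
    bc = np.mean(u(theta, x_bd) ** 2)
    return pde + lam * bc

def num_grad(theta, eps=1e-6):
    # Numerical gradient, for self-containedness only (autodiff is standard).
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp = theta.copy(); tp[i] += eps
        tm = theta.copy(); tm[i] -= eps
        g[i] = (loss(tp) - loss(tm)) / (2.0 * eps)
    return g

loss_history = [loss(theta)]
for _ in range(300):
    theta -= 1e-3 * num_grad(theta)   # plain gradient descent on the loss
    loss_history.append(loss(theta))
```

The two terms of `loss` are exactly the two error sources discussed in the abstract: the residual term measures how well the network satisfies the PDE, while `lam` controls how strongly the Dirichlet condition is imposed, so the two accuracies must be balanced against each other.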
