{"title":"线性平流-反作用方程的最小二乘神经网络 (LSNN) 方法:不连续界面","authors":"Zhiqiang Cai, Junpyo Choi, Min Liu","doi":"10.1137/23m1568107","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Scientific Computing, Volume 46, Issue 4, Page C448-C478, August 2024. <br/> Abstract. We studied the least-squares ReLU neural network (LSNN) method for solving a linear advection-reaction equation with discontinuous solution in [Z. Cai et al., J. Comput. Phys., 443 (2021), 110514]. The method is based on a least-squares formulation and uses a new class of approximating functions: ReLU neural network (NN) functions. A critical and additional component of the LSNN method, differing from other NN-based methods, is the introduction of a properly designed and physics preserved discrete differential operator. In this paper, we study the LSNN method for problems with discontinuity interfaces. First, we show that ReLU NN functions with depth [math] can approximate any [math]-dimensional step function on a discontinuity interface generated by a vector field as streamlines with any prescribed accuracy. By decomposing the solution into continuous and discontinuous parts, we prove theoretically that the discretization error of the LSNN method using ReLU NN functions with depth [math] is mainly determined by the continuous part of the solution provided that the solution jump is constant. Numerical results for both two- and three-dimensional test problems with various discontinuity interfaces show that the LSNN method with enough layers is accurate and does not exhibit the common Gibbs phenomena along discontinuity interfaces.","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Least-Squares Neural Network (LSNN) Method for Linear Advection-Reaction Equation: Discontinuity Interface\",\"authors\":\"Zhiqiang Cai, Junpyo Choi, Min Liu\",\"doi\":\"10.1137/23m1568107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Scientific Computing, Volume 46, Issue 4, Page C448-C478, August 2024. <br/> Abstract. We studied the least-squares ReLU neural network (LSNN) method for solving a linear advection-reaction equation with discontinuous solution in [Z. Cai et al., J. Comput. Phys., 443 (2021), 110514]. The method is based on a least-squares formulation and uses a new class of approximating functions: ReLU neural network (NN) functions. A critical and additional component of the LSNN method, differing from other NN-based methods, is the introduction of a properly designed and physics preserved discrete differential operator. In this paper, we study the LSNN method for problems with discontinuity interfaces. First, we show that ReLU NN functions with depth [math] can approximate any [math]-dimensional step function on a discontinuity interface generated by a vector field as streamlines with any prescribed accuracy. By decomposing the solution into continuous and discontinuous parts, we prove theoretically that the discretization error of the LSNN method using ReLU NN functions with depth [math] is mainly determined by the continuous part of the solution provided that the solution jump is constant. 
Numerical results for both two- and three-dimensional test problems with various discontinuity interfaces show that the LSNN method with enough layers is accurate and does not exhibit the common Gibbs phenomena along discontinuity interfaces.\",\"PeriodicalId\":3,\"journal\":{\"name\":\"ACS Applied Electronic Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Electronic Materials\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/23m1568107\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/23m1568107","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
SIAM Journal on Scientific Computing, Volume 46, Issue 4, Pages C448-C478, August 2024.

Abstract. In [Z. Cai et al., J. Comput. Phys., 443 (2021), 110514], we studied the least-squares ReLU neural network (LSNN) method for solving a linear advection-reaction equation with a discontinuous solution. The method is based on a least-squares formulation and uses a new class of approximating functions: ReLU neural network (NN) functions. A critical additional component of the LSNN method, distinguishing it from other NN-based methods, is the introduction of a properly designed, physics-preserving discrete differential operator. In this paper, we study the LSNN method for problems with discontinuity interfaces. First, we show that ReLU NN functions with depth [math] can approximate any [math]-dimensional step function on a discontinuity interface generated by the streamlines of a vector field, to any prescribed accuracy. By decomposing the solution into continuous and discontinuous parts, we prove theoretically that the discretization error of the LSNN method using ReLU NN functions with depth [math] is determined mainly by the continuous part of the solution, provided that the solution jump is constant. Numerical results for two- and three-dimensional test problems with various discontinuity interfaces show that the LSNN method with sufficiently many layers is accurate and does not exhibit the Gibbs phenomenon commonly observed along discontinuity interfaces.
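The abstract describes the least-squares formulation only at a high level. As a rough illustration of the general idea (not the authors' method), the sketch below minimizes a discrete least-squares functional for the advection-reaction equation β·∇u + γu = f over a ReLU network, with the inflow boundary condition added as a penalty term. It uses plain automatic differentiation rather than the paper's physics-preserving discrete differential operator, and all problem data (the advection field, reaction coefficient, source, and discontinuous inflow datum) are hypothetical.

```python
# Minimal sketch of a least-squares ReLU NN loss for  beta . grad(u) + gamma*u = f
# on the unit square. This is an illustration only: it uses plain autograd for the
# differential operator, NOT the paper's physics-preserving discrete operator, and
# all problem data below are made up for the example.
import torch

torch.manual_seed(0)

beta = torch.tensor([1.0, 0.5])   # hypothetical constant advection field
gamma, f_val = 1.0, 0.0           # hypothetical reaction coefficient and source

# Shallow ReLU network u_theta : R^2 -> R.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

def ls_loss(x_interior, x_inflow, g_inflow):
    """Least-squares functional: interior residual + inflow boundary mismatch."""
    x = x_interior.clone().requires_grad_(True)
    u = model(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # (N, 2)
    residual = grad_u @ beta + gamma * u.squeeze(-1) - f_val          # PDE residual
    boundary = model(x_inflow).squeeze(-1) - g_inflow                 # inflow mismatch
    return residual.pow(2).mean() + boundary.pow(2).mean()

# One optimization step on random collocation points (illustration only).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_in = torch.rand(1024, 2)                                        # interior points
x_bd = torch.stack([torch.zeros(256), torch.rand(256)], dim=1)    # part of inflow (x = 0)
g_bd = (x_bd[:, 1] > 0.5).float()                                 # a discontinuous inflow datum
opt.zero_grad()
loss = ls_loss(x_in, x_bd, g_bd)
loss.backward()
opt.step()
```

The discontinuous inflow datum above would produce a discontinuity interface along the streamline of the advection field, which is exactly the setting where the naive autograd residual breaks down and the paper's discrete differential operator is needed; the sketch is only meant to make the least-squares functional concrete.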