A nonoverlapping domain decomposition method for extreme learning machines: Elliptic problems

Impact Factor 2.9 · CAS Tier 2 (Mathematics) · JCR Q1 (Mathematics, Applied)
Chang-Ock Lee, Youngkyu Lee, Byungeun Ryoo
{"title":"A nonoverlapping domain decomposition method for extreme learning machines: Elliptic problems","authors":"Chang-Ock Lee ,&nbsp;Youngkyu Lee ,&nbsp;Byungeun Ryoo","doi":"10.1016/j.camwa.2025.04.001","DOIUrl":null,"url":null,"abstract":"<div><div>Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single hidden layer feed-forward neural network. It presets the weight/bias coefficients in the hidden layer with random values, which remain fixed throughout the computation, and uses a linear least squares method for training the parameters of the output layer of the neural network. It is known to be much faster than Physics informed neural networks. However, classical ELM is still computationally expensive when a high level of representation is desired in the solution as this requires solving a large least squares system. In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation. We introduce local neural networks, which are valid only at corresponding subdomains, and an auxiliary variable at the interface. We construct a system on the variable and the parameters of local neural networks. A Schur complement system on the interface can be derived by eliminating the parameters of the output layer. The auxiliary variable is then directly obtained by solving the reduced system after which the parameters for each local neural network are solved in parallel. A method for initializing the hidden layer parameters suitable for high approximation quality in large systems is also proposed. Numerical results that verify the acceleration performance of the proposed method with respect to the number of subdomains are presented.</div></div>","PeriodicalId":55218,"journal":{"name":"Computers & Mathematics with Applications","volume":"189 ","pages":"Pages 109-128"},"PeriodicalIF":2.9000,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Mathematics with Applications","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0898122125001403","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single-hidden-layer feed-forward neural network. It presets the weight/bias coefficients of the hidden layer with random values, which remain fixed throughout the computation, and trains the parameters of the output layer by a linear least-squares method. It is known to be much faster than physics-informed neural networks. However, classical ELM is still computationally expensive when a highly expressive representation of the solution is desired, since this requires solving a large least-squares system. In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs but is also suitable for parallel computation. We introduce local neural networks, each valid only on its corresponding subdomain, and an auxiliary variable on the interface. We construct a coupled system in this variable and the output-layer parameters of the local neural networks. A Schur complement system on the interface can be derived by eliminating the output-layer parameters. The auxiliary variable is then obtained directly by solving the reduced system, after which the parameters of each local neural network are solved in parallel. A method for initializing the hidden-layer parameters suitable for high approximation quality in large systems is also proposed. Numerical results that verify the acceleration performance of the proposed method with respect to the number of subdomains are presented.
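To make the ELM idea concrete, the following is a minimal sketch (not the authors' implementation) of a single-domain ELM for the 1D Poisson problem -u''(x) = f(x) on (0, 1) with homogeneous Dirichlet boundary conditions. The network width, tanh activation, and the uniform sampling of the fixed hidden-layer weights are assumptions made for the example; only the output-layer coefficients are trained, via one least-squares solve.

```python
# Minimal ELM sketch for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# manufactured so that the exact solution is u(x) = sin(pi x).
# Illustrative only: width, activation, and weight ranges are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m = 100                                   # number of hidden neurons (assumed)
w = rng.uniform(-5.0, 5.0, size=m)        # fixed random hidden weights
b = rng.uniform(-5.0, 5.0, size=m)        # fixed random hidden biases

def phi(x):                               # hidden-layer features, shape (n, m)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):                            # second x-derivative of tanh(w x + b)
    z = np.outer(x, w) + b
    t = np.tanh(z)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

x_int = np.linspace(0.0, 1.0, 301)[1:-1]  # interior collocation points
x_bdy = np.array([0.0, 1.0])              # boundary points
f = np.pi**2 * np.sin(np.pi * x_int)      # right-hand side for u = sin(pi x)

# Stack PDE rows (-u'' = f) and boundary rows (u = 0) into one least-squares system
A = np.vstack([-phi_xx(x_int), phi(x_bdy)])
rhs = np.concatenate([f, np.zeros(2)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # train only the output layer

x_test = np.linspace(0.0, 1.0, 501)
print("max error:", np.max(np.abs(phi(x_test) @ c - np.sin(np.pi * x_test))))
```

The point of the sketch is that all of the training cost sits in the single dense least-squares solve, which is exactly the cost the proposed domain decomposition method targets as the system grows.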
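The elimination step described in the abstract follows the standard Schur-complement pattern. Below is a toy illustration of that pattern on a generic two-subdomain block system: c1 and c2 stand in for the output-layer parameters of two local networks and lam for the auxiliary interface variable. The block sizes and the random fill are purely illustrative assumptions, not the system actually derived in the paper.

```python
# Schur-complement elimination on a toy 2-subdomain block system:
# [[A1, 0, B1], [0, A2, B2], [C1, C2, D]] [c1; c2; lam] = [f1; f2; g].
# Blocks are random and well-conditioned for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, ni = 8, 8, 3                      # local and interface unknown counts (assumed)

def spd(n):                               # random well-conditioned square block
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

A1, A2, D = spd(n1), spd(n2), spd(ni)     # local blocks and interface block
B1 = rng.standard_normal((n1, ni))        # local-to-interface coupling
B2 = rng.standard_normal((n2, ni))
C1, C2 = B1.T.copy(), B2.T.copy()         # interface-to-local coupling
f1, f2, g = rng.standard_normal(n1), rng.standard_normal(n2), rng.standard_normal(ni)

# Eliminate the local unknowns to get a small interface (Schur complement) system
S = D - C1 @ np.linalg.solve(A1, B1) - C2 @ np.linalg.solve(A2, B2)
r = g - C1 @ np.linalg.solve(A1, f1) - C2 @ np.linalg.solve(A2, f2)
lam = np.linalg.solve(S, r)

# Back-substitution: the subdomain solves are independent, hence parallelizable
c1 = np.linalg.solve(A1, f1 - B1 @ lam)
c2 = np.linalg.solve(A2, f2 - B2 @ lam)

# Sanity check against the monolithic solve
K = np.block([[A1, np.zeros((n1, n2)), B1],
              [np.zeros((n2, n1)), A2, B2],
              [C1, C2, D]])
ref = np.linalg.solve(K, np.concatenate([f1, f2, g]))
print(np.allclose(np.concatenate([c1, c2, lam]), ref))
```

The reduced system S lam = r lives only on the interface, which is why solving it first and then recovering each subdomain's parameters in parallel is cheaper than one monolithic solve.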
Source journal
Computers & Mathematics with Applications (Engineering & Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 5.10
Self-citation rate: 10.30%
Articles published: 396
Review time: 9.9 weeks
Journal description: Computers & Mathematics with Applications provides a medium of exchange for those engaged in fields contributing to building successful simulations for science and engineering using Partial Differential Equations (PDEs).