A resolution independent neural operator

IF 7.3 · CAS Zone 1 (Engineering) · JCR Q1 ENGINEERING, MULTIDISCIPLINARY
Bahador Bahmani , Somdatta Goswami , Ioannis G. Kevrekidis , Michael D. Shields
Citations: 0

Abstract

The Deep operator network (DeepONet) is a powerful yet simple neural operator architecture that utilizes two deep neural networks to learn mappings between infinite-dimensional function spaces. This architecture is highly flexible, allowing the evaluation of the solution field at any location within the desired domain. However, it imposes a strict constraint on the input space, requiring all input functions to be discretized at the same locations; this limits its practical applications. In this work, we introduce a general framework for operator learning from input–output data with arbitrary number and locations of sensors. This begins by introducing a resolution-independent DeepONet (RI-DeepONet), enabling it to handle input functions that are arbitrarily, but sufficiently finely, discretized. To this end, we propose two dictionary learning algorithms to adaptively learn a set of appropriate continuous basis functions, parameterized as implicit neural representations (INRs), from correlated signals defined on arbitrary point cloud data. These basis functions are then used to project arbitrary input function data as a point cloud onto an embedding space (i.e., a vector space of finite dimensions) with dimensionality equal to the dictionary size, which can be directly used by DeepONet without any architectural changes. In particular, we utilize sinusoidal representation networks (SIRENs) as trainable INR basis functions. The introduced dictionary learning algorithms are then used in a similar way to learn an appropriate dictionary of basis functions for the output function data, which defines a new neural operator architecture referred to as the Resolution Independent Neural Operator (RINO). In the RINO, the operator learning task simplifies to learning a mapping from the coefficients of input basis functions to the coefficients of output basis functions. 
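The projection step described above can be illustrated with a rough numpy sketch. Note this is a hypothetical stand-in, not the paper's actual algorithm: the random sinusoidal dictionary below substitutes for the trained SIREN-parameterized INR basis functions, and a simple least-squares fit substitutes for the paper's dictionary learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary of K sinusoidal basis functions on [0, 1],
# standing in for trained SIREN-parameterized INRs (assumed, for illustration).
K = 16
freqs = rng.uniform(1.0, 20.0, size=K)       # assumed random frequencies
phases = rng.uniform(0.0, 2 * np.pi, size=K)  # assumed random phases

def dictionary(x):
    """Evaluate all K continuous basis functions at sensor locations x (shape (n,))."""
    return np.sin(np.outer(x, freqs) + phases)  # shape (n, K)

# An input function sampled at arbitrary sensor locations (a 1-D "point cloud").
x_sensors = np.sort(rng.uniform(0.0, 1.0, size=200))
u_values = np.sin(2 * np.pi * x_sensors) + 0.5 * np.cos(6 * np.pi * x_sensors)

# Project onto the embedding space: a coefficient vector of fixed size K,
# obtained here by least squares against the dictionary evaluations.
Phi = dictionary(x_sensors)
coeffs, *_ = np.linalg.lstsq(Phi, u_values, rcond=None)

# `coeffs` has the same dimensionality no matter how many sensors were used,
# so it can be fed to a DeepONet branch network without architectural changes.
print(coeffs.shape)  # (16,)
```

Because the basis functions are continuous, the same dictionary can be evaluated at any other set of sensor locations, which is what makes the embedding resolution independent.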
We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input and output functions during both training and inference through several numerical examples.
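Under the same illustrative assumptions, the RINO learning task reduces to fitting a map between finite-dimensional coefficient vectors. The sketch below uses a ridge-regularized linear map on synthetic coefficients as a stand-in for the neural networks used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

K_in, K_out, n_samples = 16, 12, 500

# Hypothetical training data: the input/output dictionary coefficients that the
# projection step would produce for paired input-output function samples.
A_true = rng.normal(size=(K_out, K_in))        # assumed ground-truth linear operator
C_in = rng.normal(size=(n_samples, K_in))      # input-function coefficients
C_out = C_in @ A_true.T + 0.01 * rng.normal(size=(n_samples, K_out))

# RINO-style learning task: map input coefficients to output coefficients.
# Ridge-regularized least squares stands in for the trained network.
lam = 1e-6
A_hat = np.linalg.solve(C_in.T @ C_in + lam * np.eye(K_in), C_in.T @ C_out).T

# Predict output coefficients for a new input function; the prediction is a
# coefficient vector, so the output field can be evaluated anywhere by
# recombining it with the continuous output basis functions.
c_new = rng.normal(size=K_in)
c_pred = A_hat @ c_new
print(c_pred.shape)  # (12,)
```

The point of the sketch is the shape of the problem: once both input and output functions live in fixed-size coefficient spaces, the operator-learning step no longer depends on how either function was discretized.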
Source journal
CiteScore: 12.70
Self-citation rate: 15.30%
Annual publications: 719
Review time: 44 days
About the journal: Computer Methods in Applied Mechanics and Engineering stands as a cornerstone in the realm of computational science and engineering. With a history spanning over five decades, the journal has been a key platform for disseminating papers on advanced mathematical modeling and numerical solutions. Interdisciplinary in nature, these contributions encompass mechanics, mathematics, computer science, and various scientific disciplines. The journal welcomes a broad range of computational methods addressing the simulation, analysis, and design of complex physical problems, making it a vital resource for researchers in the field.