A resolution independent neural operator
Bahador Bahmani, Somdatta Goswami, Ioannis G. Kevrekidis, Michael D. Shields
Computer Methods in Applied Mechanics and Engineering, Vol. 444, Article 118113. Published 2025-06-11. DOI: 10.1016/j.cma.2025.118113
https://www.sciencedirect.com/science/article/pii/S0045782525003858
Citations: 0
Abstract
The deep operator network (DeepONet) is a powerful yet simple neural operator architecture that uses two deep neural networks to learn mappings between infinite-dimensional function spaces. The architecture is highly flexible, allowing the solution field to be evaluated at any location in the desired domain. However, it imposes a strict constraint on the input space: all input functions must be discretized at the same sensor locations, which limits its practical applicability. In this work, we introduce a general framework for operator learning from input–output data with an arbitrary number and placement of sensors. We begin by introducing a resolution-independent DeepONet (RI-DeepONet), which can handle input functions that are arbitrarily, but sufficiently finely, discretized. To this end, we propose two dictionary learning algorithms that adaptively learn a set of suitable continuous basis functions, parameterized as implicit neural representations (INRs), from correlated signals defined on arbitrary point cloud data. These basis functions project arbitrary input function data, given as a point cloud, onto an embedding space (i.e., a finite-dimensional vector space) whose dimension equals the dictionary size; the resulting coefficients can be used directly by DeepONet without any architectural changes. In particular, we use sinusoidal representation networks (SIRENs) as trainable INR basis functions. The same dictionary learning algorithms are then applied to the output function data to learn an appropriate dictionary of output basis functions, which defines a new neural operator architecture referred to as the Resolution Independent Neural Operator (RINO). In RINO, the operator learning task reduces to learning a mapping from the coefficients of the input basis functions to the coefficients of the output basis functions. We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input and output functions, during both training and inference, through several numerical examples.
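To make the projection step concrete, the following is a minimal sketch of the resolution-independence mechanism under stated assumptions: fixed sinusoidal features stand in for the trained SIREN dictionary, and the embedding is computed by ordinary least squares, one plausible choice of projection. All names (basis, project, reconstruct) and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dictionary: K fixed sinusoidal features playing the role of the
# trained SIREN basis functions phi_k(x) learned by dictionary learning.
K = 8
freqs = rng.uniform(0.5, 5.0, size=K)
phases = rng.uniform(0.0, 2.0 * np.pi, size=K)

def basis(x):
    """Evaluate all K basis functions at sensor locations x -> shape (len(x), K)."""
    return np.sin(np.outer(x, freqs) + phases)

def project(x, u):
    """Embed samples (x_i, u(x_i)) as K coefficients via least squares.
    Any number and placement of sensors yields an embedding of the same
    fixed size K, which is the resolution-independence mechanism."""
    coeffs, *_ = np.linalg.lstsq(basis(x), u, rcond=None)
    return coeffs

def reconstruct(coeffs, x_query):
    """Evaluate the continuous reconstruction at arbitrary query points."""
    return basis(x_query) @ coeffs

# A synthetic input function lying exactly in the span of the dictionary,
# so both discretizations below recover the same coefficients.
true_coeffs = rng.normal(size=K)
f = lambda x: basis(x) @ true_coeffs

x_coarse = np.sort(rng.uniform(0.0, np.pi, 30))   # 30 random sensors
x_fine = np.sort(rng.uniform(0.0, np.pi, 300))    # 300 different sensors

a_coarse = project(x_coarse, f(x_coarse))
a_fine = project(x_fine, f(x_fine))
print(np.allclose(a_coarse, a_fine))                          # True: same embedding
print(np.allclose(reconstruct(a_coarse, x_fine), f(x_fine)))  # True
```

Because the embedding lives in a fixed K-dimensional space regardless of how the input function was sampled, a standard DeepONet branch network can consume it unchanged; in RINO, the learned operator instead maps these input coefficients to the coefficients of an analogous output dictionary.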
About the Journal
Computer Methods in Applied Mechanics and Engineering stands as a cornerstone in the realm of computational science and engineering. With a history spanning over five decades, the journal has been a key platform for disseminating papers on advanced mathematical modeling and numerical solutions. Interdisciplinary in nature, these contributions encompass mechanics, mathematics, computer science, and various scientific disciplines. The journal welcomes a broad range of computational methods addressing the simulation, analysis, and design of complex physical problems, making it a vital resource for researchers in the field.