{"title":"SIGEST","authors":"The Editors","doi":"10.1137/24n975943","DOIUrl":null,"url":null,"abstract":"SIAM Review, Volume 66, Issue 3, Page 533-533, May 2024. <br/> The SIGEST article in this issue is “Operator Learning Using Random Features: A Tool for Scientific Computing,” by Nicholas H. Nelsen and Andrew M. Stuart. This work considers the problem of operator learning in infinite-dimensional Banach spaces through the use of random features. The driving application is the approximation of solution operators to partial differential equations (PDEs), here foremost time-dependent problems, that are naturally posed in an infinite-dimensional function space. Typically here, in contrast to the mainstream big data regimes of machine learning applications such as computer vision, high resolution data coming from physical experiments or from computationally expensive simulations of such differential equations is usually small. Fast and approximate surrogates built from such data can be advantageous in building forward models for inverse problems or for doing uncertainty quantification, for instance. Showing how this can be done in infinite dimensions gives rise to approximators which are at the outset resolution and discretization invariant, allowing training on one resolution and deploying on another. At the heart of this work is the function-valued random features methodology that the authors extended from the finite setting of the classical random features approach. Here, the nonlinear operator is approximated by a linear combination of random operators which turn out to be a low-rank approximation and whose computation amounts to a convex, quadratic optimisation problem that is efficiently solvable and for which convergence guarantees can be derived. The methodology is then concretely applied to two concrete PDE examples: Burgers' equations and Darcy flow, demonstrating the applicability of the function-valued random features method, its scalability, discretization invariance, and transferability. The original 2021 article, which appeared in SIAM's Journal on Scientific Computing, has attracted considerable attention. In preparing this SIGEST version, the authors have made numerous modifications and revisions. These include expanding the introductory section and the concluding remarks, condensing the technical content and making it more accessible, and adding a link to an open access GitHub repository that contains all data and code used to produce the results in the paper.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"64 1","pages":""},"PeriodicalIF":10.8000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SIGEST\",\"authors\":\"The Editors\",\"doi\":\"10.1137/24n975943\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Review, Volume 66, Issue 3, Page 533-533, May 2024. <br/> The SIGEST article in this issue is “Operator Learning Using Random Features: A Tool for Scientific Computing,” by Nicholas H. Nelsen and Andrew M. Stuart. This work considers the problem of operator learning in infinite-dimensional Banach spaces through the use of random features. The driving application is the approximation of solution operators to partial differential equations (PDEs), here foremost time-dependent problems, that are naturally posed in an infinite-dimensional function space. 
Typically here, in contrast to the mainstream big data regimes of machine learning applications such as computer vision, high resolution data coming from physical experiments or from computationally expensive simulations of such differential equations is usually small. Fast and approximate surrogates built from such data can be advantageous in building forward models for inverse problems or for doing uncertainty quantification, for instance. Showing how this can be done in infinite dimensions gives rise to approximators which are at the outset resolution and discretization invariant, allowing training on one resolution and deploying on another. At the heart of this work is the function-valued random features methodology that the authors extended from the finite setting of the classical random features approach. Here, the nonlinear operator is approximated by a linear combination of random operators which turn out to be a low-rank approximation and whose computation amounts to a convex, quadratic optimisation problem that is efficiently solvable and for which convergence guarantees can be derived. The methodology is then concretely applied to two concrete PDE examples: Burgers' equations and Darcy flow, demonstrating the applicability of the function-valued random features method, its scalability, discretization invariance, and transferability. The original 2021 article, which appeared in SIAM's Journal on Scientific Computing, has attracted considerable attention. In preparing this SIGEST version, the authors have made numerous modifications and revisions. These include expanding the introductory section and the concluding remarks, condensing the technical content and making it more accessible, and adding a link to an open access GitHub repository that contains all data and code used to produce the results in the paper.\",\"PeriodicalId\":49525,\"journal\":{\"name\":\"SIAM Review\",\"volume\":\"64 1\",\"pages\":\"\"},\"PeriodicalIF\":10.8000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Review\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/24n975943\",\"RegionNum\":1,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Review","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/24n975943","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
SIAM Review, Volume 66, Issue 3, Page 533-533, May 2024. The SIGEST article in this issue is “Operator Learning Using Random Features: A Tool for Scientific Computing,” by Nicholas H. Nelsen and Andrew M. Stuart. This work considers the problem of operator learning in infinite-dimensional Banach spaces through the use of random features. The driving application is the approximation of solution operators of partial differential equations (PDEs), foremost time-dependent problems, which are naturally posed in infinite-dimensional function spaces. In contrast to the mainstream big-data regimes of machine learning applications such as computer vision, the high-resolution data here, coming from physical experiments or from computationally expensive simulations of such differential equations, are typically scarce. Fast approximate surrogates built from such data can be advantageous, for instance, as forward models for inverse problems or for uncertainty quantification. Showing how this can be done in infinite dimensions gives rise to approximators that are resolution- and discretization-invariant from the outset, allowing training at one resolution and deployment at another. At the heart of this work is the function-valued random features methodology, which the authors extend from the finite-dimensional setting of the classical random features approach. Here, the nonlinear operator is approximated by a linear combination of random operators; this turns out to be a low-rank approximation whose computation amounts to a convex, quadratic optimization problem that is efficiently solvable and for which convergence guarantees can be derived. The methodology is then applied to two concrete PDE examples, Burgers' equation and Darcy flow, demonstrating the applicability of the function-valued random features method, its scalability, discretization invariance, and transferability. The original 2021 article, which appeared in the SIAM Journal on Scientific Computing, has attracted considerable attention. In preparing this SIGEST version, the authors have made numerous modifications and revisions. These include expanding the introductory section and the concluding remarks, condensing the technical content to make it more accessible, and adding a link to an open-access GitHub repository that contains all data and code used to produce the results in the paper.
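To make the core mechanism concrete, below is a minimal, finite-dimensional sketch of the classical random features approach that the article generalizes to the function-valued (operator) setting. It is an illustration under stated assumptions, not the authors' method or code: the random Fourier feature map, the toy data, and the parameter choices (m, lam) are all invented for the example. What it demonstrates is the structure described above: the random parameters are drawn once and never trained, and only the linear coefficients are learned by solving a convex, quadratic ridge regression problem.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy one-dimensional regression data: learn y = sin(3x) from noisy samples.
    n, m = 200, 300                       # number of samples, number of random features
    x = rng.uniform(-1.0, 1.0, size=(n, 1))
    y = np.sin(3.0 * x[:, 0]) + 0.05 * rng.standard_normal(n)

    # Draw the feature parameters once; they stay fixed throughout training.
    w = rng.standard_normal((1, m))       # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, m)  # random phases

    def features(x):
        """Random Fourier feature map: phi_j(x) = sqrt(2/m) * cos(w_j x + b_j)."""
        return np.sqrt(2.0 / m) * np.cos(x @ w + b)

    # The model is a linear combination of the frozen random features,
    #   f(x) = sum_j alpha_j * phi_j(x),
    # and only alpha is learned, by minimizing the convex quadratic objective
    #   ||Phi alpha - y||^2 + lam * ||alpha||^2,
    # which reduces to a single symmetric linear solve.
    Phi = features(x)
    lam = 1e-6
    alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

    # Predict at new points using the same frozen random features.
    x_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
    y_pred = features(x_test) @ alpha
    print(np.c_[x_test[:, 0], y_pred, np.sin(3.0 * x_test[:, 0])])

In the article's operator-learning setting, the scalar feature map is replaced by randomly parameterized maps between function spaces, so the learned model sends input functions to output functions; the training step, however, remains a convex, quadratic problem of the same form.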
Journal Introduction:
Survey and Review features papers that provide an integrative and current viewpoint on important topics in applied or computational mathematics and scientific computing. These papers aim to offer a comprehensive perspective on their subject matter.
Research Spotlights publishes concise research papers in applied and computational mathematics that are of interest to a wide range of readers of SIAM Review. The papers in this section present innovative ideas that are clearly explained and motivated. They stand out from regular publications in specific SIAM journals due to their accessibility and potential for widespread and long-lasting influence.