Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning

IF 2.6 · CAS Zone 2 (Mathematics) · JCR Q1, MATHEMATICS, APPLIED
Hao Liu, Biraj Dahal, Rongjie Lai, Wenjing Liao
{"title":"Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning","authors":"Hao Liu ,&nbsp;Biraj Dahal ,&nbsp;Rongjie Lai ,&nbsp;Wenjing Liao","doi":"10.1016/j.acha.2024.101717","DOIUrl":null,"url":null,"abstract":"<div><div>Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and problem size. In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, while also demonstrating the robustness of AENet to noise.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101717"},"PeriodicalIF":2.6000,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied and Computational Harmonic Analysis","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1063520324000940","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of the data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and the problem size. In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating an Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to the corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, while also demonstrating the robustness of AENet to noise.
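To make the two-stage structure concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: an auto-encoder compresses each discretized input function to a few latent variables, and a second network maps those latent variables to the discretized output function. All names, layer widths, and training details here are illustrative assumptions for exposition, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    """Stage 1: learn low-dimensional latent variables of the input data."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, u):
        z = self.encoder(u)
        return self.decoder(z), z

class LatentToOutput(nn.Module):
    """Stage 2: map latent variables to the discretized output function."""
    def __init__(self, latent_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, z):
        return self.net(z)

def train_aenet(u, v, latent_dim=4, epochs=500, lr=1e-3):
    """u, v: (n_samples, n_grid) tensors of discretized input/output functions."""
    # Stage 1: fit the auto-encoder to the inputs alone.
    ae = AutoEncoder(u.shape[1], latent_dim)
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        u_hat, _ = ae(u)
        F.mse_loss(u_hat, u).backward()
        opt.step()
    # Stage 2: freeze the encoder and fit the latent-to-output map.
    head = LatentToOutput(latent_dim, v.shape[1])
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    with torch.no_grad():
        z = ae.encoder(u)
    for _ in range(epochs):
        opt.zero_grad()
        F.mse_loss(head(z), v).backward()
        opt.step()
    return ae, head
```

The point relevant to the paper's theory is that the sample complexity is governed by `latent_dim` (a stand-in for the intrinsic dimension of the modeled process) rather than by the grid resolution of the discretized functions.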
Source journal
Applied and Computational Harmonic Analysis (Physics: Mathematical Physics)
CiteScore: 5.40
Self-citation rate: 4.00%
Articles per year: 67
Review time: 22.9 weeks
About the journal: Applied and Computational Harmonic Analysis (ACHA) is an interdisciplinary journal that publishes high-quality papers in all areas of mathematical sciences related to the applied and computational aspects of harmonic analysis, with special emphasis on innovative theoretical development, methods, and algorithms for information processing, manipulation, understanding, and so forth. The objectives of the journal are to chronicle the important publications in the rapidly growing field of data representation and analysis, to stimulate research in relevant interdisciplinary areas, and to provide a common link among mathematical, physical, and life scientists, as well as engineers.