Error-in-variables modelling for operator learning

Ravi G. Patel, Indu Manickam, Myoungkyu Lee, Mamikon A. Gulian
{"title":"Error-in-variables modelling for operator learning","authors":"Ravi G. Patel, Indu Manickam, Myoungkyu Lee, Mamikon A. Gulian","doi":"10.48550/arXiv.2204.10909","DOIUrl":null,"url":null,"abstract":"Deep operator learning has emerged as a promising tool for reduced-order modelling and PDE model discovery. Leveraging the expressive power of deep neural networks, especially in high dimensions, such methods learn the mapping between functional state variables. While proposed methods have assumed noise only in the dependent variables, experimental and numerical data for operator learning typically exhibit noise in the independent variables as well, since both variables represent signals that are subject to measurement error. In regression on scalar data, failure to account for noisy independent variables can lead to biased parameter estimates. With noisy independent variables, linear models fitted via ordinary least squares (OLS) will show attenuation bias, wherein the slope will be underestimated. In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables. In the nonlinear setting, we computationally demonstrate underprediction of the action of the Burgers operator in the presence of noise in the independent variable. We propose error-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables for a variety of operator learning problems. Considering the Burgers operator in 1D and 2D, we demonstrate that EiV operator learning robustly recovers operators in high-noise regimes that defeat OLS operator learning. We also introduce an EiV model for time-evolving PDE discovery and show that OLS and EiV perform similarly in learning the Kuramoto-Sivashinsky evolution operator from corrupted data, suggesting that the effect of bias in OLS operator learning depends on the regularity of the target operator.","PeriodicalId":189279,"journal":{"name":"Mathematical and Scientific Machine Learning","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematical and Scientific Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2204.10909","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Deep operator learning has emerged as a promising tool for reduced-order modelling and PDE model discovery. Leveraging the expressive power of deep neural networks, especially in high dimensions, such methods learn the mapping between functional state variables. While proposed methods have assumed noise only in the dependent variables, experimental and numerical data for operator learning typically exhibit noise in the independent variables as well, since both variables represent signals that are subject to measurement error. In regression on scalar data, failure to account for noisy independent variables can lead to biased parameter estimates. With noisy independent variables, linear models fitted via ordinary least squares (OLS) will show attenuation bias, wherein the slope will be underestimated. In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables. In the nonlinear setting, we computationally demonstrate underprediction of the action of the Burgers operator in the presence of noise in the independent variable. We propose error-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables for a variety of operator learning problems. Considering the Burgers operator in 1D and 2D, we demonstrate that EiV operator learning robustly recovers operators in high-noise regimes that defeat OLS operator learning. We also introduce an EiV model for time-evolving PDE discovery and show that OLS and EiV perform similarly in learning the Kuramoto-Sivashinsky evolution operator from corrupted data, suggesting that the effect of bias in OLS operator learning depends on the regularity of the target operator.
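To make the attenuation-bias claim above concrete, the following is a minimal NumPy sketch of the classical scalar case the abstract references: fitting a slope by ordinary least squares when the independent variable is observed with white noise. It illustrates errors-in-variables bias in simple regression, not the paper's operator-learning models; all variable names and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True linear relationship y = beta * x with beta = 2 (hypothetical example).
beta = 2.0
n = 10_000
x_true = rng.normal(0.0, 1.0, n)             # latent independent variable
y = beta * x_true + rng.normal(0.0, 0.1, n)  # dependent variable with mild noise

# The measured independent variable carries white noise of variance sigma_u^2.
sigma_u = 1.0
x_obs = x_true + rng.normal(0.0, sigma_u, n)

# OLS slope (no intercept) computed from the noisy regressor.
beta_ols = (x_obs @ y) / (x_obs @ x_obs)

# Classical attenuation factor: Var(x) / (Var(x) + sigma_u^2).
attenuation = x_true.var() / (x_true.var() + sigma_u**2)

print(f"true slope          : {beta:.3f}")
print(f"OLS slope (noisy x) : {beta_ols:.3f}")            # biased toward zero
print(f"predicted OLS slope : {beta * attenuation:.3f}")  # ~1.0 for these variances
```

With unit-variance x and unit-variance measurement noise, the attenuation factor is 1/2, so OLS recovers a slope near 1 instead of the true 2. The paper derives an analogue of this effect for linear operator regression and proposes EiV formulations of MOR-Physics and DeepONet to correct it.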