Error Guarantees for Least Squares Approximation with Noisy Samples in Domain Adaptation

Felix Bartel
{"title":"域自适应中带噪声样本最小二乘逼近的误差保证","authors":"Felix Bartel","doi":"10.5802/smai-jcm.96","DOIUrl":null,"url":null,"abstract":"Given $n$ samples of a function $f\\colon D\\to\\mathbb C$ in random points drawn with respect to a measure $\\varrho_S$ we develop theoretical analysis of the $L_2(D, \\varrho_T)$-approximation error. For a parituclar choice of $\\varrho_S$ depending on $\\varrho_T$, it is known that the weighted least squares method from finite dimensional function spaces $V_m$, $\\dim(V_m) = m<\\infty$ has the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\\varrho_S$ and the target measure $\\varrho_T$ differ we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in unifom random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\\text{mix}}^2$ basis, this gives a novel stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.","PeriodicalId":376888,"journal":{"name":"The SMAI journal of computational mathematics","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Error Guarantees for Least Squares Approximation with Noisy Samples in Domain Adaptation\",\"authors\":\"Felix Bartel\",\"doi\":\"10.5802/smai-jcm.96\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Given $n$ samples of a function $f\\\\colon D\\\\to\\\\mathbb C$ in random points drawn with respect to a measure $\\\\varrho_S$ we develop theoretical analysis of the $L_2(D, \\\\varrho_T)$-approximation error. For a parituclar choice of $\\\\varrho_S$ depending on $\\\\varrho_T$, it is known that the weighted least squares method from finite dimensional function spaces $V_m$, $\\\\dim(V_m) = m<\\\\infty$ has the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\\\\varrho_S$ and the target measure $\\\\varrho_T$ differ we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in unifom random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\\\\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\\\\text{mix}}^2$ basis, this gives a novel stable approximation method with quadratic error decay. 
Numerical experiments indicate the applicability of our results.\",\"PeriodicalId\":376888,\"journal\":{\"name\":\"The SMAI journal of computational mathematics\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The SMAI journal of computational mathematics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5802/smai-jcm.96\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The SMAI journal of computational mathematics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5802/smai-jcm.96","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Given $n$ samples of a function $f\colon D\to\mathbb C$ at random points drawn with respect to a measure $\varrho_S$, we develop a theoretical analysis of the $L_2(D, \varrho_T)$-approximation error. For a particular choice of $\varrho_S$ depending on $\varrho_T$, it is known that the weighted least squares method on finite-dimensional function spaces $V_m$, $\dim(V_m) = m<\infty$, has the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\varrho_S$ and the target measure $\varrho_T$ differ, we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade-off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in uniform random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\mathrm{mix}}^2$ basis, we obtain a novel stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.
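
To make the setting concrete, here is a minimal sketch of a weighted least squares fit from noisy random samples, in the spirit of the method described above. It is not the paper's estimator: the monomial basis, uniform samples, constant weights, and the test function are all illustrative assumptions.

```python
# Minimal illustrative sketch: weighted least squares fit of noisy samples
# onto an m-dimensional polynomial space V_m on [0, 1].
# The basis, weights, and measures are NOT the particular choices analyzed
# in the paper; they only illustrate the general setup.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x)             # example target function

n, m = 200, 10                               # number of samples, dim(V_m)
x = rng.random(n)                            # samples from the source measure (here: uniform)
y = f(x) + 0.05 * rng.standard_normal(n)     # noisy samples

A = np.vander(x, m, increasing=True)         # design matrix for a monomial basis of V_m
w = np.ones(n)                               # illustrative sample weights (constant)

# Solve min_c || diag(sqrt(w)) (A c - y) ||_2
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)

# Estimate the L_2 error with respect to the (here uniform) target measure.
xx = np.linspace(0.0, 1.0, 1000)
err = np.sqrt(np.mean((np.vander(xx, m, increasing=True) @ coef - f(xx)) ** 2))
print(f"estimated L2 error for m = {m}: {err:.3e}")
```

Increasing $m$ shrinks the best-approximation (bias) part of the error but makes the fit more sensitive to the noise; this bias-variance trade-off as a function of $m$ is what the paper's bounds quantify.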