A novel probabilistic transfer learning strategy for polynomial regression

Wyatt Bridgman, Uma Balakrishnan, Reese E. Jones, Jiefu Chen, Xuqing Wu, Cosmin Safta, Yueqin Huang, Mohammad Khalil
DOI: 10.1615/int.j.uncertaintyquantification.2024052051
Published: 2024-08-01 (Journal Article)

Abstract

In the field of surrogate modeling and, more recently, in machine learning, transfer learning methodologies have been proposed in which knowledge from a source task is transferred to a target task where sparse and/or noisy data result in an ill-posed calibration problem. Such sparsity can result from prohibitively expensive forward model simulations or simply from a lack of experimental data. Transfer learning attempts to improve target model calibration by leveraging similarities between the source and target tasks. This often takes the form of parameter-based transfer, which exploits correlations between the parameters defining the source and target models in order to regularize the target task. The majority of these approaches are deterministic and do not account for uncertainty in the model parameters. In this work, we propose a novel probabilistic transfer learning methodology that transfers knowledge from the posterior distribution of the source task to the target Bayesian inverse problem using an approach inspired by data assimilation. While the methodology is presented in general terms, it is subsequently investigated in the context of polynomial regression and, more specifically, Polynomial Chaos Expansions, which yield Gaussian posterior distributions in the case of i.i.d. Gaussian observation noise and conjugate Gaussian prior distributions. The strategy is evaluated using numerical investigations and applied to an engineering problem from the oil and gas industry.
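The conjugate Gaussian setting the abstract refers to can be illustrated with a short sketch: for a model linear in its coefficients (such as a polynomial or PCE basis), i.i.d. Gaussian noise and a Gaussian prior give a closed-form Gaussian posterior. The transfer step shown here — reusing the source posterior, with inflated covariance, as the target prior — is a simplified stand-in for the paper's data-assimilation-inspired transfer, not the authors' exact method; all variable names and the inflation factor are illustrative assumptions.

```python
import numpy as np

def conjugate_posterior(Phi, y, m0, S0, noise_var):
    """Closed-form Gaussian posterior N(mn, Sn) for y = Phi @ w + eps,
    eps ~ N(0, noise_var * I), with conjugate prior w ~ N(m0, S0)."""
    S0_inv = np.linalg.inv(S0)
    Sn = np.linalg.inv(S0_inv + Phi.T @ Phi / noise_var)
    mn = Sn @ (S0_inv @ m0 + Phi.T @ y / noise_var)
    return mn, Sn

rng = np.random.default_rng(0)

# Source task: plentiful data, broad prior -> well-informed posterior.
w_src = np.array([1.0, -2.0, 0.5])                 # illustrative coefficients
x_src = rng.uniform(-1, 1, 50)
Phi_src = np.vander(x_src, 3, increasing=True)     # polynomial basis [1, x, x^2]
y_src = Phi_src @ w_src + 0.1 * rng.standard_normal(50)
m_src, S_src = conjugate_posterior(Phi_src, y_src,
                                   np.zeros(3), 10.0 * np.eye(3), 0.01)

# Target task: only a few noisy observations of a *similar* model.
# Transfer: use the source posterior (covariance inflated to admit
# task discrepancy) as the target prior, regularizing the sparse fit.
x_tgt = rng.uniform(-1, 1, 4)
Phi_tgt = np.vander(x_tgt, 3, increasing=True)
y_tgt = Phi_tgt @ (w_src + 0.2) + 0.1 * rng.standard_normal(4)
m_tgt, S_tgt = conjugate_posterior(Phi_tgt, y_tgt, m_src, 4.0 * S_src, 0.01)
```

With only four target observations the unregularized least-squares problem is poorly conditioned; the transferred prior anchors the target posterior near the source solution while still letting the target data pull the coefficients toward their shifted values.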