On sparse regression, Lp-regularization, and automated model discovery

IF 2.7 · CAS Zone 3, Engineering · Q1 ENGINEERING, MULTIDISCIPLINARY
Jeremy A. McCulloch, Skyler R. St. Pierre, Kevin Linka, Ellen Kuhl
Journal: International Journal for Numerical Methods in Engineering, Volume 125, Issue 14
DOI: 10.1002/nme.7481
Published: 2024-04-08 (Journal Article)
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/nme.7481
Citations: 0

Abstract


On sparse regression, Lp-regularization, and automated model discovery

Sparse regression and feature extraction are the cornerstones of knowledge discovery from massive data. Their goal is to discover interpretable and predictive models that provide simple relationships among scientific variables. While the statistical tools for model discovery are well established in the context of linear regression, their generalization to nonlinear regression in material modeling is highly problem-specific and insufficiently understood. Here we explore the potential of neural networks for automatic model discovery and induce sparsity by a hybrid approach that combines two strategies: regularization and physical constraints. We integrate the concept of Lp regularization for subset selection with constitutive neural networks that leverage our domain knowledge in kinematics and thermodynamics. We train our networks with both synthetic and real data, and perform several thousand discovery runs to infer common guidelines and trends: L2 regularization or ridge regression is unsuitable for model discovery; L1 regularization or lasso promotes sparsity, but induces strong bias that may aggressively change the results; only L0 regularization allows us to transparently fine-tune the trade-off between interpretability and predictability, simplicity and accuracy, and bias and variance. With these insights, we demonstrate that Lp regularized constitutive neural networks can simultaneously discover both interpretable models and physically meaningful parameters. We anticipate that our findings will generalize to alternative discovery techniques such as sparse and symbolic regression, and to other domains such as biology, chemistry, or medicine. Our ability to automatically discover material models from data could have tremendous applications in generative material design and open new opportunities to manipulate matter, alter properties of existing materials, and discover new materials with user-defined properties.
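The abstract's contrast between ridge and lasso can be illustrated on a toy linear problem. The sketch below is not the paper's constitutive-neural-network method; it is a minimal, self-contained demonstration (assumed synthetic data, NumPy only) that an L2 penalty shrinks all coefficients while keeping them nonzero, whereas an L1 penalty, solved here by plain proximal gradient descent (ISTA), drives irrelevant coefficients exactly to zero — the subset-selection behavior the paper exploits.

```python
import numpy as np

# Synthetic linear data: 6 features, only 2 truly active.
rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
w_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0])
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Ridge (L2): closed-form solution of min ||y - Xw||^2 + lam * ||w||_2^2.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Lasso (L1) via ISTA: min (1/2n)||y - Xw||^2 + lam1 * ||w||_1.
def ista_lasso(X, y, lam1, n_iter=2000):
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - grad / L
        # Soft-thresholding: the proximal operator of the L1 penalty.
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)
    return w

w_lasso = ista_lasso(X, y, lam1=0.1)

# Ridge typically keeps all coefficients nonzero; lasso typically
# recovers exactly the two active features.
print("ridge nonzeros:", int(np.sum(np.abs(w_ridge) > 1e-6)))
print("lasso nonzeros:", int(np.sum(np.abs(w_lasso) > 1e-6)))
```

Note how lasso's surviving coefficients are also biased toward zero relative to `w_true`, which is the "strong bias" the abstract attributes to L1 and the motivation for preferring L0 when unbiased parameter estimates matter.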

Source journal
CiteScore: 5.70
Self-citation rate: 6.90%
Articles per year: 276
Review time: 5.3 months
Journal description: The International Journal for Numerical Methods in Engineering publishes original papers describing significant, novel developments in numerical methods that are applicable to engineering problems. The Journal is known for welcoming contributions in a wide range of areas in computational engineering, including computational issues in model reduction, uncertainty quantification, verification and validation, inverse analysis and stochastic methods, optimisation, element technology, solution techniques and parallel computing, damage and fracture, mechanics at micro and nano-scales, low-speed fluid dynamics, fluid-structure interaction, electromagnetics, coupled diffusion phenomena, and error estimation and mesh generation. It is emphasized that this is by no means an exhaustive list, and particularly papers on multi-scale, multi-physics or multi-disciplinary problems, and on new, emerging topics are welcome.