A Two-Stage Training Method for Modeling Constrained Systems With Neural Networks

IF 3.4 · CAS Tier 3, Economics · JCR Q1 ECONOMICS
C. Coelho, M. Fernanda P. Costa, L.L. Ferrás
Journal of Forecasting, Vol. 44, Issue 5, pp. 1785–1805. DOI: 10.1002/for.3270. Published 2025-03-23 (Journal Article). https://onlinelibrary.wiley.com/doi/10.1002/for.3270
Citations: 0

Abstract


Real-world systems are often formulated as constrained optimization problems. Techniques to incorporate constraints into neural networks (NN), such as neural ordinary differential equations (Neural ODEs), have been used. However, these introduce hyperparameters that require manual tuning through trial and error, raising doubts about whether the constraints are successfully incorporated into the generated model. This paper describes in detail the two-stage training method for Neural ODEs, a simple, effective, and penalty-parameter-free approach to modeling constrained systems. In this approach, the constrained optimization problem is rewritten as two optimization subproblems that are solved in two stages. The first stage aims at finding feasible NN parameters by minimizing a measure of constraint violation. The second stage aims to find the optimal NN parameters by minimizing the loss function while remaining inside the feasible region. We experimentally demonstrate that our method produces models that satisfy the constraints and also improves their predictive performance, thus ensuring compliance with critical system properties and also contributing to reducing data quantity requirements. Furthermore, we show that the proposed method improves the convergence to an optimal solution and improves the explainability of Neural ODE models. Our proposed two-stage training method can be used with any NN architecture.
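The abstract outlines the two stages: first drive a constraint-violation measure to zero to reach the feasible region, then minimize the data-fitting loss while staying (near-)feasible. A minimal sketch of that idea follows, under stated assumptions: the paper's subproblem solvers are not specified here, so this uses plain gradient descent with finite differences, a toy linear model `y = a*x + b` in place of a Neural ODE, and the hypothetical constraint `b = 3` standing in for a known system property (e.g. a known initial value). Feasibility after each loss step is restored by a short inner violation-minimizing loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + 3 plus small noise.
# Constraint (a stand-in for a known system property): the model must
# satisfy f(0) = 3, i.e. b == 3 exactly.
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + 3.0 + 0.01 * rng.normal(size=50)

theta = np.array([0.0, 0.0])  # [a, b] for the model a*x + b


def loss(t):
    """Data-fitting objective (mean squared error)."""
    return np.mean((t[0] * x + t[1] - y) ** 2)


def violation(t):
    """Measure of constraint violation; zero exactly when feasible."""
    return (t[1] - 3.0) ** 2


def grad(f, t, eps=1e-6):
    """Central finite differences keep the sketch free of autodiff machinery."""
    g = np.zeros_like(t)
    for i in range(t.size):
        d = np.zeros_like(t)
        d[i] = eps
        g[i] = (f(t + d) - f(t - d)) / (2 * eps)
    return g


# Stage 1: find a feasible point by minimizing the violation measure alone.
for _ in range(200):
    theta = theta - 0.1 * grad(violation, theta)

# Stage 2: minimize the loss; after each step, restore feasibility so the
# iterates stay inside (or very near) the feasible region.
for _ in range(500):
    theta = theta - 0.1 * grad(loss, theta)       # loss-decreasing step
    for _ in range(20):                           # feasibility restoration
        theta = theta - 0.1 * grad(violation, theta)

print(theta)  # a near 2, b pinned near 3 by the constraint
```

No penalty parameter appears anywhere: the constraint is handled entirely by the stage split, which is the property the abstract highlights. In the paper the same scheme is applied to Neural ODE training, where `theta` is the network's weights and the gradients come from backpropagation rather than finite differences.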

Source journal: Journal of Forecasting
CiteScore: 5.40 · Self-citation rate: 5.90% · Articles per year: 91
Journal description: The Journal of Forecasting is an international journal that publishes refereed papers on forecasting. It is multidisciplinary, welcoming papers dealing with any aspect of forecasting: theoretical, practical, computational and methodological. A broad interpretation of the topic is taken, with approaches from various subject areas, such as statistics, economics, psychology, systems engineering and social sciences, all encouraged. Furthermore, the Journal welcomes a wide diversity of applications in such fields as business, government, technology and the environment. Of particular interest are papers dealing with modelling issues and the relationship of forecasting systems to decision-making processes.