Deep Policy Iteration for high-dimensional mean field games

IF 3.5 · CAS Tier 2 (Mathematics) · Q1 MATHEMATICS, APPLIED
Mouhcine Assouli, Badr Missaoui
{"title":"高维均值场博弈的深度策略迭代","authors":"Mouhcine Assouli,&nbsp;Badr Missaoui","doi":"10.1016/j.amc.2024.128923","DOIUrl":null,"url":null,"abstract":"<div><p>This paper introduces Deep Policy Iteration (DPI), a novel approach that integrates the strengths of Neural Networks with the stability and convergence advantages of Policy Iteration (PI) to address high-dimensional stochastic Mean Field Games (MFG). DPI overcomes the limitations of PI, which is constrained by the curse of dimensionality to low-dimensional problems, by iteratively training three neural networks to solve PI equations and satisfy forward-backwards conditions. Our findings indicate that DPI achieves comparable convergence levels to the Mean Field Deep Galerkin Method (MFDGM), with additional advantages. Furthermore, deep learning techniques show promise in handling separable Hamiltonian cases where PI alone is less effective. DPI effectively manages high-dimensional problems, extending the applicability of PI to both separable and non-separable Hamiltonians.</p></div>","PeriodicalId":55496,"journal":{"name":"Applied Mathematics and Computation","volume":null,"pages":null},"PeriodicalIF":3.5000,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Policy Iteration for high-dimensional mean field games\",\"authors\":\"Mouhcine Assouli,&nbsp;Badr Missaoui\",\"doi\":\"10.1016/j.amc.2024.128923\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>This paper introduces Deep Policy Iteration (DPI), a novel approach that integrates the strengths of Neural Networks with the stability and convergence advantages of Policy Iteration (PI) to address high-dimensional stochastic Mean Field Games (MFG). DPI overcomes the limitations of PI, which is constrained by the curse of dimensionality to low-dimensional problems, by iteratively training three neural networks to solve PI equations and satisfy forward-backwards conditions. Our findings indicate that DPI achieves comparable convergence levels to the Mean Field Deep Galerkin Method (MFDGM), with additional advantages. Furthermore, deep learning techniques show promise in handling separable Hamiltonian cases where PI alone is less effective. 
DPI effectively manages high-dimensional problems, extending the applicability of PI to both separable and non-separable Hamiltonians.</p></div>\",\"PeriodicalId\":55496,\"journal\":{\"name\":\"Applied Mathematics and Computation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2024-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Mathematics and Computation\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0096300324003849\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Mathematics and Computation","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0096300324003849","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

This paper introduces Deep Policy Iteration (DPI), a novel approach that integrates the strengths of Neural Networks with the stability and convergence advantages of Policy Iteration (PI) to address high-dimensional stochastic Mean Field Games (MFG). DPI overcomes the limitations of PI, which is constrained by the curse of dimensionality to low-dimensional problems, by iteratively training three neural networks to solve PI equations and satisfy forward-backwards conditions. Our findings indicate that DPI achieves comparable convergence levels to the Mean Field Deep Galerkin Method (MFDGM), with additional advantages. Furthermore, deep learning techniques show promise in handling separable Hamiltonian cases where PI alone is less effective. DPI effectively manages high-dimensional problems, extending the applicability of PI to both separable and non-separable Hamiltonians.
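
The abstract does not reproduce the method's equations or code, but the structure it describes (three networks trained iteratively against the policy-iteration equations and the forward-backward MFG conditions) can be illustrated. Below is a minimal sketch, assuming a 1D stochastic MFG with the separable Hamiltonian H(p) = |p|²/2 (so the optimal feedback is q* = -u_x), a congestion-type coupling f(x, m) = m, zero terminal cost, and a Gaussian initial density; all network sizes, loss weights, and sampling choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical DPI-style sketch for a 1D stochastic MFG (not the authors' code).
# MFG system (assumed convention):
#   HJB (backward):  -u_t - nu*u_xx + |u_x|^2/2 = f(x, m),  u(T, x) = 0
#   FP  (forward):    m_t - nu*m_xx + (m*q)_x   = 0,        m(0, x) = m0(x)
# Policy iteration linearizes the HJB around a fixed feedback q, solves the
# forward-backward pair, then improves the policy toward q* = -u_x.
import math
import torch
import torch.nn as nn

nu, T = 0.3, 1.0  # viscosity and horizon (assumed values)

def mlp():
    return nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

u_net, m_net, q_net = mlp(), mlp(), mlp()  # value u, density m, policy q

def derivs(net, t, x):
    """Network output plus its t-derivative, x-derivative, and x-second-derivative."""
    y = net(torch.cat([t, x], dim=1))
    yt, yx = torch.autograd.grad(y.sum(), (t, x), create_graph=True)
    yxx = torch.autograd.grad(yx.sum(), x, create_graph=True)[0]
    return y, yt, yx, yxx

opt = torch.optim.Adam([p for n in (u_net, m_net, q_net)
                        for p in n.parameters()], lr=1e-3)

for step in range(5000):
    t = T * torch.rand(256, 1, requires_grad=True)
    x = 2.0 * torch.rand(256, 1, requires_grad=True) - 1.0  # domain [-1, 1]

    u, ut, ux, uxx = derivs(u_net, t, x)
    m, mt, mx, mxx = derivs(m_net, t, x)
    q, _, qx, _ = derivs(q_net, t, x)

    # Fixed-policy (linear) HJB residual: -u_t - nu*u_xx - q*u_x - q^2/2 - f(x,m) = 0
    hjb = -ut - nu * uxx - q.detach() * ux - 0.5 * q.detach() ** 2 - m.detach()
    # Fokker-Planck residual with drift q: m_t - nu*m_xx + (m*q)_x = 0
    fp = mt - nu * mxx + mx * q.detach() + m * qx.detach()
    # Policy improvement: push q toward the Hamiltonian's minimizer, q* = -u_x
    pol = q + ux.detach()

    # Terminal condition u(T, x) = 0 and initial density m(0, x) ~ N(0, 0.1)
    xb = 2.0 * torch.rand(256, 1) - 1.0
    uT = u_net(torch.cat([torch.full_like(xb, T), xb], dim=1))
    m0 = m_net(torch.cat([torch.zeros_like(xb), xb], dim=1))
    m0_star = torch.exp(-xb ** 2 / 0.2) / math.sqrt(0.2 * math.pi)

    loss = (hjb ** 2).mean() + (fp ** 2).mean() + (pol ** 2).mean() \
           + (uT ** 2).mean() + ((m0 - m0_star) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The detach() calls emulate the alternating structure of policy iteration within a single training loop: the value network sees a frozen policy and density, the density network a frozen policy, and the policy network is pulled toward the minimizer of the Hamiltonian given the current value. The paper targets high-dimensional problems; this 1D version only illustrates how the three residual losses couple.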

Source journal: Applied Mathematics and Computation
CiteScore: 7.90
Self-citation rate: 10.00%
Articles published per year: 755
Review time: 36 days
Journal overview: Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results. In addition to presenting research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.