Model-based reinforcement learning control of reaction-diffusion problems

Christina Schenk, Aditya Vasudevan, Maciej Haranczyk, Ignacio Romero
Journal: Optimal Control Applications and Methods
DOI: 10.1002/oca.3196
Published: 2024-08-08 (Journal Article)
Citations: 0

Abstract

Mathematical and computational tools have proven reliable in decision-making processes. In recent times, machine learning-based methods in particular have become increasingly popular as advanced support tools. In control problems, reinforcement learning has been applied to decision-making in several applications, most notably in games. The success of these methods in solving complex problems motivates the exploration of new areas where they can be employed to overcome current difficulties. In this article, we explore the application of automatic control strategies to initial boundary value problems in thermal and disease transport. Specifically, we adapt an existing reinforcement learning algorithm using a stochastic policy gradient method and introduce two novel reward functions to drive the flow of the transported field. The new model-based framework exploits the interactions between a reaction-diffusion model and the modified agent. The results show that certain controls can be implemented successfully in these applications, although model simplifications had to be assumed.
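The abstract describes a model-based loop in which a stochastic policy-gradient agent interacts with a reaction-diffusion model. The paper's actual algorithm, equations, and its two reward functions are not reproduced on this page, so the following is only a minimal illustrative sketch under stated assumptions: a 1D heat equation as the environment, a linear-Gaussian boundary-control policy, and a quadratic tracking reward, trained with a REINFORCE-style update.

```python
import numpy as np

# Illustrative sketch only -- NOT the authors' implementation.  It shows
# the generic ingredients the abstract names: a reaction-diffusion model
# used as the environment, and a stochastic policy-gradient agent that
# controls it.  The 1D heat equation, the linear-Gaussian policy, and
# the tracking reward are assumptions made for this example.

rng = np.random.default_rng(0)
N = 50                    # number of grid points
dx = 1.0 / N              # grid spacing
D = 1.0                   # diffusivity
dt = 0.4 * dx**2 / D      # explicit-Euler step (stable: D*dt/dx^2 <= 0.5)
sigma = 0.1               # policy noise
target = 0.5              # desired mean value of the field

def step_pde(u, boundary):
    """Advance u_t = D u_xx one explicit step; control sets the left boundary."""
    u = u.copy()
    u[0] = boundary
    u[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

def rollout(theta, T=100):
    """One episode under a Gaussian policy; returns score terms and rewards."""
    u = np.zeros(N)
    grads, rewards = [], []
    for _ in range(T):
        feat = np.array([u.mean(), 1.0])          # simple state features
        mu = theta @ feat                         # policy mean (linear feedback)
        a = rng.normal(mu, sigma)                 # sampled boundary control
        grads.append((a - mu) / sigma**2 * feat)  # grad_theta log pi(a|u)
        u = step_pde(u, a)
        rewards.append(-(u.mean() - target) ** 2) # quadratic tracking reward
    return np.array(grads), np.array(rewards)

# REINFORCE update with reward-to-go returns and normalized advantages.
theta, lr = np.zeros(2), 1e-2
for _ in range(200):
    grads, rewards = rollout(theta)
    G = rewards[::-1].cumsum()[::-1]              # reward-to-go
    adv = (G - G.mean()) / (G.std() + 1e-8)       # variance reduction
    theta = theta + lr * (grads * adv[:, None]).mean(axis=0)
```

The PDE solver plays the role of the learned or known transport model in a model-based setup: the agent never touches real data, only rollouts of the model, which is what makes the framework sample-efficient compared with model-free training.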