Reinforcement learning for anisotropic p-adaptation and error estimation in high-order solvers

David Huergo, Martín De Frutos, Eduardo Jané, Oscar A. Marino, Gonzalo Rubio, Esteban Ferrer

Journal of Computational Physics, Volume 536, Article 114080. Published 2025-05-12. DOI: 10.1016/j.jcp.2025.114080
Citations: 0
Abstract
We present a novel approach to automating and optimizing anisotropic p-adaptation in high-order h/p solvers using Reinforcement Learning (RL). The dynamic RL adaptation uses the evolving solution to adjust the high-order polynomials. We develop an offline training approach, decoupled from the main solver, which incurs minimal overhead when performing simulations. In addition, we derive an inexpensive RL-based error estimation approach that enables the quantification of local discretization errors. The proposed methodology is agnostic to both the computational mesh and the partial differential equation being solved.
The application of RL to mesh adaptation offers several benefits. It enables automated and adaptive mesh refinement, reducing the need for manual intervention. It optimizes computational resources by dynamically allocating high-order polynomials where necessary and minimizing refinement in stable regions. This leads to computational cost savings while maintaining the accuracy of the solution. Furthermore, RL allows for the exploration of unconventional mesh adaptations, potentially enhancing the accuracy and robustness of simulations. This work extends our original research in [1], offering a more robust, reproducible, and generalizable approach applicable to complex three-dimensional problems. We validate the approach on laminar and turbulent cases (circular cylinders, the Taylor-Green vortex, and a 10 MW wind turbine) to illustrate its flexibility.
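As a rough illustration of the idea (not the authors' implementation), the per-element decision can be framed as a small RL problem: observe a local resolution indicator, choose to lower, keep, or raise the polynomial order p, and receive a reward that balances accuracy against computational cost. The sketch below uses plain tabular Q-learning; the state buckets, reward weights, and the error/cost model are invented assumptions for illustration only.

```python
# Illustrative sketch only: a toy tabular Q-learning agent that picks per-element
# polynomial orders from a coarse local indicator. The state/action/reward
# definitions here are simplified assumptions, not the paper's formulation.
import random

P_MIN, P_MAX = 1, 6        # admissible polynomial orders
ACTIONS = (-1, 0, 1)       # lower, keep, or raise the element's order p

def local_state(indicator):
    """Bucket a normalized resolution indicator (0..1) into 3 coarse states."""
    if indicator < 0.3:
        return 0           # smooth region: a low order suffices
    if indicator < 0.7:
        return 1           # moderately resolved
    return 2               # under-resolved: raising p should pay off

def reward(indicator, p):
    """Toy reward: trade estimated error (falls with p) against DOF cost."""
    return -(indicator / p + 0.05 * p)

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
    for _ in range(episodes):
        indicator = rng.random()
        p = rng.randint(P_MIN, P_MAX)
        s = local_state(indicator)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)                    # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        p_new = min(P_MAX, max(P_MIN, p + a))
        # Single-step episodes, so the update has no bootstrapped term.
        Q[(s, a)] += alpha * (reward(indicator, p_new) - Q[(s, a)])
    return Q

Q = train()
# Greedy policy per state: which action the trained agent prefers.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(3)}
print(policy)
```

In the anisotropic setting of the paper, a decision of this kind would be made per spatial direction of each element, and training happens offline so the deployed policy adds little cost to the simulation itself.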
Journal introduction:
Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries.
The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.