Harsh H. Sawant, Rahul Gujar, Neeta Mandhare, M. J. Sable, Prashant K. Ambadekar, S. H. Gawande
DOI: 10.1002/fld.5395
International Journal for Numerical Methods in Fluids, Vol. 97, Issue 8, pp. 1142-1156. Published 2025-03-12 (Journal Article). Impact Factor 1.7; JCR Q3, Computer Science, Interdisciplinary Applications.
URL: https://onlinelibrary.wiley.com/doi/10.1002/fld.5395
Comparative Analysis of Reinforcement Learning Agents for Optimizing Airfoil Shapes
This work investigates the optimization of airfoil shapes using various reinforcement learning (RL) algorithms, including Deep Deterministic Policy Gradient (DDPG), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Trust Region Policy Optimization (TRPO). The primary objective is to enhance the aerodynamic performance of airfoils by maximizing lift forces across different angles of attack (AoA). The study compares the optimized airfoils against the standard NACA 2412 airfoil. The DDPG-optimized airfoil demonstrated superior performance at lower and moderate AoAs, while the TRPO-optimized airfoil excelled at higher AoAs. In contrast, the TD3-optimized airfoil consistently underperformed. The results indicate that RL algorithms, particularly DDPG and TRPO, can effectively improve airfoil designs, offering substantial benefits in lift generation. This paper underscores the potential of RL techniques in aerodynamic shape optimization, presenting significant implications for aerospace and related industries.
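The abstract describes framing airfoil shape design as an RL problem: an agent perturbs shape parameters and is rewarded for the lift the resulting airfoil generates at a given angle of attack. A minimal toy sketch of that framing is below. It is not the paper's actual setup: the environment class, the two NACA-4-digit-style camber parameters, and the thin-airfoil-style lift proxy `Cl = 2*pi*(alpha + 2*camber)` are all illustrative assumptions, and a random-search loop stands in for the DDPG/TD3/TRPO agents, which would require a full RL library and a CFD solver for the reward.

```python
import math
import random

class AirfoilEnv:
    """Toy environment: state = (max camber, camber position), reward = lift proxy.

    Hypothetical sketch only; the paper couples its agents to an aerodynamic
    solver, not to this closed-form proxy.
    """

    def __init__(self, aoa_deg=4.0):
        self.aoa = math.radians(aoa_deg)  # angle of attack, radians
        self.reset()

    def reset(self):
        # Start from NACA 2412-like parameters: 2% max camber at 40% chord.
        self.camber, self.camber_pos = 0.02, 0.40
        return (self.camber, self.camber_pos)

    def step(self, action):
        # Action: small perturbations of the two shape parameters,
        # clipped to a plausible NACA 4-digit range.
        d_m, d_p = action
        self.camber = min(max(self.camber + d_m, 0.0), 0.09)
        self.camber_pos = min(max(self.camber_pos + d_p, 0.1), 0.9)
        # Thin-airfoil-style lift proxy: Cl grows with AoA and camber.
        cl = 2.0 * math.pi * (self.aoa + 2.0 * self.camber)
        return (self.camber, self.camber_pos), cl

def random_search(env, steps=200, seed=0):
    """Baseline 'agent': random perturbations, track the best lift seen."""
    rng = random.Random(seed)
    env.reset()
    best = -float("inf")
    for _ in range(steps):
        action = (rng.uniform(-0.005, 0.005), rng.uniform(-0.02, 0.02))
        _, reward = env.step(action)
        best = max(best, reward)
    return best

if __name__ == "__main__":
    env = AirfoilEnv(aoa_deg=4.0)
    print(f"best lift proxy found: {random_search(env):.3f}")
```

An actor-critic agent such as DDPG would replace `random_search` with a learned deterministic policy over the same `step`/`reset` interface, which is why the abstract can compare agents directly: only the policy changes, not the environment.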
About the Journal:
The International Journal for Numerical Methods in Fluids publishes refereed papers describing significant developments in computational methods applicable to scientific and engineering problems in fluid mechanics, fluid dynamics, micro- and bio-fluidics, and fluid-structure interaction. Numerical methods for solving ancillary equations, such as transport, advection, and diffusion, are also relevant. The Editors encourage contributions in the areas of multi-physics, multi-disciplinary, and multi-scale problems involving fluid subsystems, verification and validation, uncertainty quantification, and model reduction.
Numerical examples that illustrate the described methods or their accuracy are generally expected. Discussions of papers already in print are also considered. However, papers dealing strictly with applications of existing methods, or with areas of research the Editors do not deem cutting edge, will not be considered for review.
The journal publishes full-length papers, which should normally be less than 25 journal pages in length. Two-part papers are discouraged unless considered necessary by the Editors.