Authors: Jinhua Lou, Rong-Yawn Chen, Jiaqi Liu, Yue Bao, Y. You, Zhengwu Chen
DOI: 10.1063/5.0137002
Journal: Physics of Fluids (impact factor 4.1; JCR Q1, Mechanics)
Publication date: 2023-03-01 (Journal Article)
Citations: 2
Aerodynamic optimization of airfoil based on deep reinforcement learning
The traditional optimization of airfoils relies on, and is limited by, the knowledge and experience of the designer. As a method of intelligent decision-making, reinforcement learning can be used for such optimization through self-directed learning. In this paper, we use the lift–drag ratio as the objective of optimization to propose a method for the aerodynamic optimization of airfoils based on a combination of deep learning and reinforcement learning. A deep neural network (DNN) is first constructed as a surrogate model to quickly predict the lift–drag ratio of the airfoil, and a double deep Q-network (double DQN) algorithm is then designed based on deep reinforcement learning to train the optimization policy. During the training phase, the agent uses the geometric parameters of the airfoil to represent its state, adopts a stochastic policy to generate optimization experience, and uses a deterministic policy to modify the geometry of the airfoil. The DNN calculates the change in the lift–drag ratio of the airfoil as the reward, and the environment constantly feeds the states, actions, and rewards back to the agent, which dynamically updates the policy to retain positive optimization experience. Simulation results show that the double DQN can learn a general policy for optimizing the airfoil that improves its lift–drag ratio by 71.46%, and that the optimization policy generalizes to a variety of computational conditions. The proposed method can therefore rapidly predict the aerodynamic parameters of the airfoil and autonomously learn the optimization policy, rendering the entire process intelligent.
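The training loop the abstract describes — a surrogate-predicted change in lift–drag ratio as the reward, a stochastic (epsilon-greedy) behaviour policy to generate experience, and a double-DQN update in which the online network selects the next action while the target network evaluates it — can be sketched in miniature. Everything below is illustrative, not the authors' implementation: `surrogate_ld_ratio` is a hypothetical quadratic stand-in for the paper's DNN surrogate, the linear "Q-networks", the three-parameter geometry, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 3          # toy number of geometric parameters describing the airfoil
STEP = 0.05           # size of one geometry modification
ACTIONS = [(i, s) for i in range(N_PARAMS) for s in (+STEP, -STEP)]
GAMMA, LR = 0.9, 0.01

def surrogate_ld_ratio(state):
    """Hypothetical stand-in for the paper's DNN surrogate: a smooth
    function with a single optimum inside the parameter box."""
    target_geom = np.array([0.4, 0.6, 0.5])
    return 50.0 - 40.0 * np.sum((state - target_geom) ** 2)

def apply_action(state, a):
    """Deterministically modify one geometric parameter."""
    i, delta = ACTIONS[a]
    new = state.copy()
    new[i] = np.clip(new[i] + delta, 0.0, 1.0)
    return new

def init_net():
    """A tiny linear 'Q-network': Q(s) = W @ s + b, one output per action."""
    return {"W": np.zeros((len(ACTIONS), N_PARAMS)), "b": np.zeros(len(ACTIONS))}

def q_values(net, s):
    return net["W"] @ s + net["b"]

def double_dqn_update(online, target, s, a, r, s_next, lr=LR):
    """One double-DQN step: the online net selects the next action,
    the target net evaluates it (decoupling selection from evaluation)."""
    a_star = int(np.argmax(q_values(online, s_next)))
    td_target = r + GAMMA * q_values(target, s_next)[a_star]
    td_error = td_target - q_values(online, s)[a]
    online["W"][a] += lr * td_error * s   # semi-gradient update
    online["b"][a] += lr * td_error
    return td_error

online, target = init_net(), init_net()
state = rng.random(N_PARAMS)
for step in range(2000):
    # Stochastic behaviour policy (epsilon-greedy) generates experience.
    eps = max(0.05, 1.0 - step / 1000)
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps \
        else int(np.argmax(q_values(online, state)))
    next_state = apply_action(state, a)
    # Reward = surrogate-predicted change in the lift-drag ratio.
    reward = surrogate_ld_ratio(next_state) - surrogate_ld_ratio(state)
    double_dqn_update(online, target, state, a, reward, next_state)
    state = next_state
    if step % 100 == 0:   # periodically sync the target network
        target = {"W": online["W"].copy(), "b": online["b"].copy()}

# After training, a deterministic (greedy) policy modifies the geometry.
state = rng.random(N_PARAMS)
for _ in range(50):
    state = apply_action(state, int(np.argmax(q_values(online, state))))
```

The key double-DQN detail is in `double_dqn_update`: a vanilla DQN would take `max` over the target network's Q-values directly, which overestimates; here the argmax comes from the online network and only its evaluation from the target network.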
About the journal:
Physics of Fluids (PoF) is a preeminent journal devoted to publishing original theoretical, computational, and experimental contributions to the understanding of the dynamics of gases, liquids, and complex or multiphase fluids. Topics published in PoF are diverse and reflect the most important subjects in fluid dynamics, including, but not limited to:
-Acoustics
-Aerospace and aeronautical flow
-Astrophysical flow
-Biofluid mechanics
-Cavitation and cavitating flows
-Combustion flows
-Complex fluids
-Compressible flow
-Computational fluid dynamics
-Contact lines
-Continuum mechanics
-Convection
-Cryogenic flow
-Droplets
-Electrical and magnetic effects in fluid flow
-Foam, bubble, and film mechanics
-Flow control
-Flow instability and transition
-Flow orientation and anisotropy
-Flows with other transport phenomena
-Flows with complex boundary conditions
-Flow visualization
-Fluid mechanics
-Fluid physical properties
-Fluid–structure interactions
-Free surface flows
-Geophysical flow
-Interfacial flow
-Knudsen flow
-Laminar flow
-Liquid crystals
-Mathematics of fluids
-Micro- and nanofluid mechanics
-Mixing
-Molecular theory
-Nanofluidics
-Particulate, multiphase, and granular flow
-Processing flows
-Relativistic fluid mechanics
-Rotating flows
-Shock wave phenomena
-Soft matter
-Stratified flows
-Supercritical fluids
-Superfluidity
-Thermodynamics of flow systems
-Transonic flow
-Turbulent flow
-Viscous and non-Newtonian flow
-Viscoelasticity
-Vortex dynamics
-Waves