{"title":"基于物理信息神经网络的主动流量控制高效深度强化学习策略","authors":"Wulong Hu, Zhangze Jiang, Mingyang Xu, Hanyu Hu","doi":"10.1063/5.0213256","DOIUrl":null,"url":null,"abstract":"Reducing the reliance on intrusive flow probes is a critical task in active flow control based on deep reinforcement learning (DRL). Although a scarcity of flow data captured by probes adversely impacts the control proficiency of the DRL agent, leading to suboptimal flow modulation, minimizing the use of redundant probes significantly reduces the overall implementation costs, making the control strategy more economically viable. In this paper, we propose an active flow control method based on physics-informed DRL. This method integrates a physics-informed neural network into the DRL framework, harnessing the inherent physical characteristics of the flow field using strategically placed probes. We analyze the impact of probe placement, probe quantity, and DRL agent sampling strategies on the fidelity of flow predictions and the efficacy of flow control. Using the wake control of a two-dimensional cylinder flow with a Reynolds number of 100 as a case study, we position a specific number of flow probes within the flow field to gather pertinent information. When benchmarked against traditional DRL techniques, the results are unequivocal: in terms of training efficiency, physics-informed DRL reduces the training cycle by up to 30 rounds. Furthermore, by decreasing the number of flow probes in the flow field from 164 to just 4, the physics-based DRL achieves superior drag reduction through more precise control. Notably, compared to traditional DRL control, the drag reduction effect is enhanced by a significant 6%.","PeriodicalId":509470,"journal":{"name":"Physics of Fluids","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Efficient deep reinforcement learning strategies for active flow control based on physics-informed neural networks\",\"authors\":\"Wulong Hu, Zhangze Jiang, Mingyang Xu, Hanyu Hu\",\"doi\":\"10.1063/5.0213256\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reducing the reliance on intrusive flow probes is a critical task in active flow control based on deep reinforcement learning (DRL). Although a scarcity of flow data captured by probes adversely impacts the control proficiency of the DRL agent, leading to suboptimal flow modulation, minimizing the use of redundant probes significantly reduces the overall implementation costs, making the control strategy more economically viable. In this paper, we propose an active flow control method based on physics-informed DRL. This method integrates a physics-informed neural network into the DRL framework, harnessing the inherent physical characteristics of the flow field using strategically placed probes. We analyze the impact of probe placement, probe quantity, and DRL agent sampling strategies on the fidelity of flow predictions and the efficacy of flow control. Using the wake control of a two-dimensional cylinder flow with a Reynolds number of 100 as a case study, we position a specific number of flow probes within the flow field to gather pertinent information. When benchmarked against traditional DRL techniques, the results are unequivocal: in terms of training efficiency, physics-informed DRL reduces the training cycle by up to 30 rounds. 
Furthermore, by decreasing the number of flow probes in the flow field from 164 to just 4, the physics-based DRL achieves superior drag reduction through more precise control. Notably, compared to traditional DRL control, the drag reduction effect is enhanced by a significant 6%.\",\"PeriodicalId\":509470,\"journal\":{\"name\":\"Physics of Fluids\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Physics of Fluids\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1063/5.0213256\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics of Fluids","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1063/5.0213256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Reducing the reliance on intrusive flow probes is a critical task in active flow control based on deep reinforcement learning (DRL). Although a scarcity of flow data captured by probes adversely impacts the control proficiency of the DRL agent, leading to suboptimal flow modulation, minimizing the use of redundant probes significantly reduces the overall implementation cost and makes the control strategy more economically viable. In this paper, we propose an active flow control method based on physics-informed DRL. This method integrates a physics-informed neural network into the DRL framework, harnessing the inherent physical characteristics of the flow field using strategically placed probes. We analyze the impact of probe placement, probe quantity, and DRL agent sampling strategies on the fidelity of flow predictions and the efficacy of flow control. Using the wake control of a two-dimensional cylinder flow at a Reynolds number of 100 as a case study, we position a specific number of flow probes within the flow field to gather pertinent information. When benchmarked against traditional DRL techniques, the results are unequivocal: in terms of training efficiency, physics-informed DRL shortens the training cycle by up to 30 rounds. Furthermore, by decreasing the number of flow probes in the flow field from 164 to just 4, physics-informed DRL achieves superior drag reduction through more precise control. Notably, compared to traditional DRL control, the drag reduction effect is enhanced by a significant 6%.
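
To make the architecture described in the abstract concrete, the sketch below shows one plausible way a physics-informed state estimator can feed a DRL actor that reads only a handful of probes. The paper's actual network sizes, training loop, and reward are not reproduced here; every name in this snippet (FlowPINN, physics_residual, Policy), the layer widths, and the non-dimensional viscosity are illustrative assumptions rather than the authors' implementation. The idea it illustrates: a PINN maps space-time coordinates to (u, v, p) and is penalized by the residuals of the incompressible Navier-Stokes equations, which is what lets a few probes constrain the reconstruction; the actor then maps the reconstructed probe state to a bounded actuation command.

```python
# Minimal, illustrative PyTorch sketch (not the authors' code): a PINN that
# reconstructs the flow from sparse probes, feeding a DRL actor network.
import torch
import torch.nn as nn


class FlowPINN(nn.Module):
    """Maps space-time coordinates (x, y, t) to predicted (u, v, p)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyt):
        return self.net(xyt)  # columns: u, v, p


def physics_residual(model, xyt, nu=0.01):
    """Mean squared residual of the continuity and momentum equations at
    collocation points; nu = 1/Re = 0.01 in non-dimensional units (assumed)."""
    xyt = xyt.clone().requires_grad_(True)
    u, v, p = model(xyt).unbind(dim=1)

    def grad(f):
        return torch.autograd.grad(
            f, xyt, grad_outputs=torch.ones_like(f), create_graph=True)[0]

    du, dv, dp = grad(u), grad(v), grad(p)          # each row: d()/d(x, y, t)
    u_x, u_y, u_t = du[:, 0], du[:, 1], du[:, 2]
    v_x, v_y, v_t = dv[:, 0], dv[:, 1], dv[:, 2]
    u_xx, u_yy = grad(u_x)[:, 0], grad(u_y)[:, 1]   # second derivatives
    v_xx, v_yy = grad(v_x)[:, 0], grad(v_y)[:, 1]

    continuity = u_x + v_y
    mom_u = u_t + u * u_x + v * u_y + dp[:, 0] - nu * (u_xx + u_yy)
    mom_v = v_t + u * v_x + v * v_y + dp[:, 1] - nu * (v_xx + v_yy)
    return (continuity ** 2 + mom_u ** 2 + mom_v ** 2).mean()


class Policy(nn.Module):
    """DRL actor: maps (u, v, p) reconstructed at n_probes locations to a
    single bounded jet-actuation command."""

    def __init__(self, n_probes=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * n_probes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, probe_state):
        return self.net(probe_state)


# Usage sketch: reconstruct the state at 4 probes and query the actor.
pinn, policy = FlowPINN(), Policy(n_probes=4)
probe_xyt = torch.rand(4, 3)                        # 4 probe locations + time
state = pinn(probe_xyt).reshape(1, -1)              # (1, 12) feature vector
action = policy(state)                              # actuation command in [-1, 1]
pde_loss = physics_residual(pinn, torch.rand(256, 3))
```

In a full training loop of this kind, the PDE residual loss would be minimized alongside a data loss at the probe locations, so that the reconstructed state can stand in for the dense many-probe observation a conventional DRL agent would otherwise require.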