Efficient deep reinforcement learning strategies for active flow control based on physics-informed neural networks

Wulong Hu, Zhangze Jiang, Mingyang Xu, Hanyu Hu
DOI: 10.1063/5.0213256
Journal: Physics of Fluids
Published: 2024-07-01 (Journal Article)
Citations: 0

Abstract

Reducing the reliance on intrusive flow probes is a critical task in active flow control based on deep reinforcement learning (DRL). Although a scarcity of flow data captured by probes adversely impacts the control proficiency of the DRL agent, leading to suboptimal flow modulation, minimizing the use of redundant probes significantly reduces the overall implementation costs, making the control strategy more economically viable. In this paper, we propose an active flow control method based on physics-informed DRL. This method integrates a physics-informed neural network into the DRL framework, harnessing the inherent physical characteristics of the flow field using strategically placed probes. We analyze the impact of probe placement, probe quantity, and DRL agent sampling strategies on the fidelity of flow predictions and the efficacy of flow control. Using the wake control of a two-dimensional cylinder flow with a Reynolds number of 100 as a case study, we position a specific number of flow probes within the flow field to gather pertinent information. When benchmarked against traditional DRL techniques, the results are unequivocal: in terms of training efficiency, physics-informed DRL reduces the training cycle by up to 30 rounds. Furthermore, by decreasing the number of flow probes in the flow field from 164 to just 4, the physics-based DRL achieves superior drag reduction through more precise control. Notably, compared to traditional DRL control, the drag reduction effect is enhanced by a significant 6%.
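The abstract describes integrating a physics-informed neural network into the DRL loop so that the agent's observation is a flow-field reconstruction built from only four probes, with a physics term regularizing the prediction. A minimal, hypothetical sketch of that coupling is below; the network sizes, the `observe` interface, and the toy smoothness-style residual are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Random MLP weights: a toy stand-in for the trained physics-informed network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Plain feed-forward pass with tanh hidden activations."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Map 4 sparse probe readings to an 8-dimensional reconstructed wake state.
pinn = mlp_init([4, 16, 8])

def physics_residual(state):
    """Hypothetical physics penalty: mean-squared variation across the
    reconstructed state, standing in for a PDE-residual term."""
    return float(np.mean(np.diff(state) ** 2))

def observe(probe_readings):
    """DRL observation = PINN reconstruction of the flow from the probes,
    plus the physics residual the training loss would penalize."""
    state = mlp_forward(pinn, np.asarray(probe_readings, dtype=float))
    return state, physics_residual(state)

probes = rng.normal(size=4)   # readings from 4 strategically placed probes
state, res = observe(probes)
print(state.shape, res >= 0.0)
```

The design point this sketch illustrates is the one the abstract claims: the agent never sees the 164-probe field directly; it acts on a learned, physics-regularized reconstruction, which is what lets the probe count drop to four without degrading control.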