Deep reinforcement learning of airfoil pitch control in a highly disturbed environment using partial observations

IF 2.5 · JCR Q2 (Physics, Fluids & Plasmas) · CAS Zone 3 (Physics & Astronomy)
Diederik Beckers, Jeff D. Eldredge
DOI: 10.1103/physrevfluids.9.093902
Journal: Physical Review Fluids
Published: 2024-09-12 (Journal Article)
Citations: 0

Abstract


Deep reinforcement learning of airfoil pitch control in a highly disturbed environment using partial observations
This study explores the application of deep reinforcement learning (RL) to design an airfoil pitch controller capable of minimizing lift variations in randomly disturbed flows. The controller, treated as an agent in a partially observable Markov decision process, receives non-Markovian observations from the environment, simulating practical constraints where flow information is limited to force and pressure sensors. Deep RL, particularly the TD3 algorithm, is used to approximate an optimal control policy under such conditions. Testing is conducted for a flat plate airfoil in two environments: a classical unsteady environment with vertical acceleration disturbances (i.e., a Wagner setup) and a viscous flow model with pulsed point force disturbances. In both cases, augmenting observations of the lift, pitch angle, and angular velocity with extra wake information (e.g., from pressure sensors) and retaining memory of past observations enhances RL control performance. Results demonstrate the capability of RL control to match or exceed standard linear controllers in minimizing lift variations. Special attention is given to the choice of training data and the generalization to unseen disturbances.
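A common way to handle the non-Markovian observations described in the abstract is to stack a short history of sensor readings into the agent's input, so a memoryless policy can still infer the disturbed flow state. The sketch below is illustrative only: the class name, window length, and sensor layout are our own assumptions, not details from the paper.

```python
import numpy as np
from collections import deque

class ObservationHistory:
    """Stack the last k observations so a memoryless RL policy can act
    in a partially observable environment.

    Hypothetical sketch: the paper augments lift, pitch-angle, and
    angular-velocity measurements with memory of past observations;
    the window length k used here is an illustrative assumption.
    """

    def __init__(self, obs_dim: int, k: int = 4):
        self.obs_dim = obs_dim
        self.k = k
        self.buffer = deque(maxlen=k)
        self.reset()

    def reset(self) -> None:
        # Pad with zeros at episode start so the stacked vector
        # always has fixed length k * obs_dim.
        self.buffer.clear()
        for _ in range(self.k):
            self.buffer.append(np.zeros(self.obs_dim))

    def push(self, obs: np.ndarray) -> np.ndarray:
        # Append the newest reading and return the flattened history,
        # oldest observation first.
        self.buffer.append(np.asarray(obs, dtype=float))
        return np.concatenate(self.buffer)

# Example: 3 sensors (lift, pitch angle, angular velocity), window of 4.
hist = ObservationHistory(obs_dim=3, k=4)
stacked = hist.push(np.array([0.1, 0.02, -0.5]))
print(stacked.shape)  # (12,)
```

The stacked vector would then be fed to the TD3 actor and critics in place of a single instantaneous observation; the algorithm itself is unchanged, only its input space grows by a factor of k.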
Source Journal
Physical Review Fluids (Chemical Engineering: Fluid Flow and Transfer Processes)
CiteScore: 5.10
Self-citation rate: 11.10%
Articles published: 488
Journal description: Physical Review Fluids is APS's newest online-only journal dedicated to publishing innovative research that will significantly advance the fundamental understanding of fluid dynamics. Physical Review Fluids expands the scope of the APS journals to include additional areas of fluid dynamics research, complements the existing Physical Review collection, and maintains the same quality and reputation that authors and subscribers expect from APS. The journal is published with the endorsement of the APS Division of Fluid Dynamics.