Off-policy reinforcement-learning-based fault-tolerant H∞ control for topside separation systems with time-varying uncertainties.

Yuguang Zhang, Juan Wang, Shaobao Li, Xiaoyuan Luo, Xinping Guan
{"title":"Off-policy reinforcement-learning-based fault-tolerant H<sub>∞</sub> control for topside separation systems with time-varying uncertainties.","authors":"Yuguang Zhang, Juan Wang, Shaobao Li, Xiaoyuan Luo, Xinping Guan","doi":"10.1016/j.isatra.2024.11.002","DOIUrl":null,"url":null,"abstract":"<p><p>The topside separation system plays a pivotal role in the treatment of produced water within offshore oil and gas production operations. Due to high-humidity and salt-infested marine environments, topside separation systems are susceptible to dynamic model variations and valve faults. In this work, fault-tolerant control (FTC) of topside separation systems subject to structural uncertainties and slugging disturbances is studied. The system is configured as a cascade structure, comprising a water level control subsystem and a pressure-drop-ratio (PDR) control subsystem. A fault-tolerant H<sub>∞</sub> control framework is developed to cope with actuator faults and slugging disturbances. To enhance control performance in the presence of actuator faults and model uncertainties while reducing sensitivity to slugging disturbances, the fault-tolerant H<sub>∞</sub> control problem for the topside separation system is established as the two-player differential game problem. In addition, a Nash equilibrium solution for the fault-tolerant H<sub>∞</sub> control problem is achieved through the solution of the game algebraic Riccati equation (GARE). A model-free approach is presented to implement the proposed fault-tolerant H<sub>∞</sub> control method using off-policy reinforcement learning (RL). Simulation studies demonstrate the effectiveness of the solution.</p>","PeriodicalId":94059,"journal":{"name":"ISA transactions","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISA transactions","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.isatra.2024.11.002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The topside separation system plays a pivotal role in the treatment of produced water in offshore oil and gas production. Because of the high-humidity, salt-laden marine environment, topside separation systems are susceptible to dynamic model variations and valve faults. In this work, fault-tolerant control (FTC) of topside separation systems subject to structural uncertainties and slugging disturbances is studied. The system is configured as a cascade structure comprising a water-level control subsystem and a pressure-drop-ratio (PDR) control subsystem. A fault-tolerant H∞ control framework is developed to cope with actuator faults and slugging disturbances. To enhance control performance in the presence of actuator faults and model uncertainties while reducing sensitivity to slugging disturbances, the fault-tolerant H∞ control problem for the topside separation system is formulated as a two-player differential game. A Nash equilibrium solution of this game is then obtained by solving the associated game algebraic Riccati equation (GARE). Finally, a model-free approach based on off-policy reinforcement learning (RL) is presented to implement the proposed fault-tolerant H∞ controller without requiring knowledge of the system dynamics. Simulation studies demonstrate the effectiveness of the proposed solution.
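For reference, fault-tolerant H∞ design in this kind of setting is commonly cast as a linear-quadratic two-player zero-sum game of the generic form below; the matrices are placeholders and do not reproduce the paper's specific cascade dynamics or actuator-fault model. The control input u minimizes the cost, the disturbance w maximizes it, and the saddle-point (Nash) policies follow from the stabilizing solution P of the GARE:

```latex
\begin{align}
  \dot{x} &= Ax + Bu + Dw, \\
  V(x(t)) &= \int_{t}^{\infty}\bigl(x^{\top}Qx + u^{\top}Ru - \gamma^{2}w^{\top}w\bigr)\,d\tau, \\
  0 &= A^{\top}P + PA + Q - PBR^{-1}B^{\top}P + \gamma^{-2}PDD^{\top}P, \\
  u^{*} &= -R^{-1}B^{\top}Px, \qquad w^{*} = \gamma^{-2}D^{\top}Px .
\end{align}
```

Here γ is the prescribed disturbance-attenuation level bounding the L2 gain from the disturbance to the penalized output.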

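As an illustration of how such a GARE can be solved iteratively, the following is a minimal model-based policy-iteration sketch (a Kleinman-type iteration extended to the two-player game), alternating policy evaluation via a Lyapunov equation with policy improvement for both players. It assumes known system matrices; the paper's off-policy RL method instead learns the corresponding quantities from measured trajectory data without a model. The function name `game_policy_iteration`, the example system, and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov


def game_policy_iteration(A, B, D, Q, R, gamma, K0, iters=100, tol=1e-9):
    """Policy iteration for the game ARE
        A'P + P A + Q - P B R^{-1} B' P + gamma^{-2} P D D' P = 0,
    alternating policy evaluation (a Lyapunov equation) with policy
    improvement for both players. K0 must stabilize A - B K0."""
    n = A.shape[0]
    K = np.atleast_2d(np.asarray(K0, dtype=float))      # control gain, u = -K x
    L = np.zeros((D.shape[1], n))                        # disturbance gain, w = L x
    P_prev = np.zeros((n, n))
    for _ in range(iters):
        Acl = A - B @ K + D @ L                          # closed loop under current policies
        Qbar = Q + K.T @ R @ K - gamma**2 * (L.T @ L)    # stage cost under current policies
        # Policy evaluation: Acl' P + P Acl + Qbar = 0
        P = solve_continuous_lyapunov(Acl.T, -Qbar)
        P = 0.5 * (P + P.T)                              # symmetrize against round-off
        # Policy improvement for the minimizing and maximizing players
        K = np.linalg.solve(R, B.T @ P)
        L = (D.T @ P) / gamma**2
        if np.linalg.norm(P - P_prev, ord="fro") < tol:
            break
        P_prev = P
    return P, K, L


if __name__ == "__main__":
    # Illustrative open-loop-stable second-order example (not the paper's model),
    # so K0 = 0 is an admissible initial stabilizing gain.
    A = np.array([[0.0, 1.0], [-1.0, -2.0]])
    B = np.array([[0.0], [1.0]])
    D = np.array([[0.0], [0.5]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    gamma = 5.0                                          # disturbance-attenuation level
    P, K, L = game_policy_iteration(A, B, D, Q, R, gamma, K0=np.zeros((1, 2)))
    print("P =\n", P)
    print("u* = -K x with K =", K)
    print("worst-case w* = L x with L =", L)
```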