Optimization of plunger lift working systems using reinforcement learning for coupled wellbore/reservoir

Zhi-Sheng Xing, Guo-Qing Han, You-Liang Jia, Wei Tian, Hang-Fei Gong, Wen-Bo Jiang, Pei-Dong Mai, Xing-Yuan Liang

Petroleum Science, Volume 22, Issue 5, May 2025, Pages 2154-2168. DOI: 10.1016/j.petsci.2025.03.009
In the mid-to-late stages of gas reservoir development, liquid loading in gas wells becomes a common challenge. Plunger lift, as an intermittent production technique, is widely used for deliquification in gas wells. With the advancement of big data and artificial intelligence, the future of oil and gas field development is trending towards intelligent, unmanned, and automated operations. Currently, the optimization of plunger lift working systems is primarily based on expert experience and manual control, focusing mainly on the success of the plunger lift without adequately considering the impact of different working systems on gas production. Additionally, liquid loading in gas wells is a dynamic process, and the intermittent nature of plunger lift requires accurate modeling; using constant inflow dynamics to describe reservoir flow introduces significant errors. To address these challenges, this study establishes a coupled wellbore–reservoir model for plunger lift wells and validates the computational wellhead pressure results against field measurements. Building on this model, a novel optimization control algorithm based on the deep deterministic policy gradient (DDPG) framework is proposed. The algorithm aims to optimize plunger lift working systems to balance overall reservoir pressure, stabilize gas–water ratios, and maximize gas production. Through simulation experiments in three different production optimization scenarios, the effectiveness of reinforcement learning algorithms (including RL, PPO, DQN, and the proposed DDPG) and traditional optimization algorithms (including GA, PSO, and Bayesian optimization) in enhancing production efficiency is compared. The results demonstrate that the coupled model provides highly accurate calculations and can precisely describe the transient production of wellbore and gas reservoir systems. The proposed DDPG algorithm achieves the highest reward value during training with minimal error, leading to a potential increase in cumulative gas production by up to 5% and cumulative liquid production by 252%. The DDPG algorithm exhibits robustness across different optimization scenarios, showcasing excellent adaptability and generalization capabilities.
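To make the DDPG framing of the abstract concrete, the sketch below shows a minimal DDPG-style training loop in PyTorch for this kind of control problem: the agent selects continuous plunger open/close settings, observes wellbore states, and is rewarded for produced gas. This is an illustrative sketch only, not the authors' implementation; the toy_env_step function is a placeholder standing in for the coupled wellbore-reservoir simulator, and the state/action definitions, network sizes, and hyperparameters are all assumptions.

# Minimal DDPG sketch (PyTorch) for plunger-lift working-system control.
# Hypothetical environment and dimensions; not the paper's code.
import random
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 2   # e.g. [casing p, tubing p, liquid level]; [t_open, t_close] (assumed)

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())          # deterministic policy, actions in [-1, 1]
critic = mlp(STATE_DIM + ACTION_DIM, 1)                # Q(s, a)
actor_t = mlp(STATE_DIM, ACTION_DIM, nn.Tanh()); actor_t.load_state_dict(actor.state_dict())
critic_t = mlp(STATE_DIM + ACTION_DIM, 1); critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer, GAMMA, TAU = [], 0.99, 0.005

def toy_env_step(state, action):
    # Placeholder for one plunger cycle of the coupled wellbore-reservoir simulator (assumption).
    next_state = np.clip(state + 0.1 * np.random.randn(STATE_DIM), -1, 1)
    reward = float(action[0] - 0.5 * abs(action[1]))   # stand-in for gas produced over the cycle
    return next_state, reward

state = np.zeros(STATE_DIM, dtype=np.float32)
for step in range(2000):
    with torch.no_grad():
        action = actor(torch.as_tensor(state)).numpy()
    action = np.clip(action + 0.1 * np.random.randn(ACTION_DIM), -1, 1)   # exploration noise
    next_state, reward = toy_env_step(state, action)
    buffer.append((state, action.astype(np.float32), reward, next_state.astype(np.float32)))
    state = next_state.astype(np.float32)

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = (torch.as_tensor(np.array(x), dtype=torch.float32) for x in zip(*batch))
        with torch.no_grad():                           # bootstrapped target from target networks
            q_target = r.unsqueeze(1) + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
        critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
        opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

        actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()   # deterministic policy gradient
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

        for net, tgt in ((actor, actor_t), (critic, critic_t)):        # soft target updates
            for p, p_t in zip(net.parameters(), tgt.parameters()):
                p_t.data.mul_(1 - TAU).add_(TAU * p.data)

In the setting described in the abstract, the reward at each step would instead come from the coupled simulator's gas and liquid production over a plunger cycle, which is what lets the learned policy trade off cycle timing against cumulative production.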
Journal introduction:
Petroleum Science is the only English-language journal in China devoted to petroleum science and technology. It is intended for professionals engaged in petroleum research and technical applications worldwide, as well as managerial personnel of oil companies. Its coverage includes petroleum geology, petroleum geophysics, petroleum engineering, petrochemistry and chemical engineering, petroleum mechanics, and economic management. The journal aims to present the latest results of oil-industry research in China, promote cooperation in petroleum science between China and the rest of the world, and serve as a bridge for scientific communication between China and the international community.