{"title":"Proximal policy optimization approach to stabilize the chaotic food web system","authors":"Liang Xu , Ru-Ru Ma , Jie Wu , Pengchun Rao","doi":"10.1016/j.chaos.2025.116033","DOIUrl":null,"url":null,"abstract":"<div><div>Chaos phenomena can be observed extensively in many real-world scenarios, which usually presents a challenge to suppress those undesired behaviors. Unlike the traditional linear and nonlinear control methods, this study introduces a deep reinforcement learning (DRL)-based scheme to regulate chaotic food web system (FWS). Specifically, we utilize the proximal policy optimization (PPO) algorithm to train the agent model, which does not necessitate the prior knowledge of chaotic FWS. Experimental results demonstrate that the developed DRL-based control scheme can effectively guide the FWS toward a predetermined stable state. Furthermore, this investigation considers the influence of environmental noise on the chaotic FWS, and we obtain the important result that incorporating noise during the training process can enhance the controller’s robustness and system adaptability.</div></div>","PeriodicalId":9764,"journal":{"name":"Chaos Solitons & Fractals","volume":"192 ","pages":"Article 116033"},"PeriodicalIF":5.3000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chaos Solitons & Fractals","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0960077925000463","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract
Chaos phenomena are observed extensively in real-world scenarios, and suppressing such undesired behaviors is usually challenging. Unlike traditional linear and nonlinear control methods, this study introduces a deep reinforcement learning (DRL)-based scheme to regulate the chaotic food web system (FWS). Specifically, we utilize the proximal policy optimization (PPO) algorithm to train the agent, which does not require prior knowledge of the chaotic FWS dynamics. Experimental results demonstrate that the developed DRL-based control scheme can effectively guide the FWS toward a predetermined stable state. Furthermore, this investigation considers the influence of environmental noise on the chaotic FWS, and we find that incorporating noise during the training process enhances the controller's robustness and the system's adaptability.
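To make the described pipeline concrete, the sketch below shows one way a PPO agent could be trained to steer a chaotic food web toward a target state, with environmental noise injected during training. This is not the authors' code: the Gymnasium environment, the Hastings-Powell-style tritrophic equations and their parameters, the quadratic tracking reward, the target state, the noise level, and the use of Stable-Baselines3 are all assumptions made for illustration.

```python
# Minimal illustrative sketch (assumed setup, not the paper's implementation):
# a Gymnasium environment wrapping a three-species food-web ODE with an
# additive control input, trained with PPO from Stable-Baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class FoodWebEnv(gym.Env):
    """Hypothetical chaotic food-web system; all parameters are illustrative."""

    def __init__(self, noise_std=0.0, dt=0.01, horizon=2000):
        self.noise_std = noise_std                  # training-time environmental noise
        self.dt = dt
        self.horizon = horizon
        self.target = np.array([0.8, 0.2, 9.0])     # assumed desired stable state
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(3,), dtype=np.float32)

    def _dynamics(self, s, u):
        # Hastings-Powell-style tritrophic food web with an additive control u.
        x, y, z = s
        f1 = 5.0 * x / (1.0 + 3.0 * x)
        f2 = 0.1 * y / (1.0 + 2.0 * y)
        dx = x * (1.0 - x) - f1 * y + u[0]
        dy = f1 * y - f2 * z - 0.4 * y + u[1]
        dz = f2 * z - 0.01 * z + u[2]
        return np.array([dx, dy, dz])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(0.1, 1.0, size=3)
        self.t = 0
        return self.state.astype(np.float32), {}

    def step(self, action):
        # Euler step plus optional environmental noise injected during training.
        noise = self.np_random.normal(0.0, self.noise_std, size=3)
        self.state = np.clip(
            self.state + self.dt * self._dynamics(self.state, action) + noise, 0.0, 50.0
        )
        self.t += 1
        reward = -float(np.sum((self.state - self.target) ** 2))  # drive toward target
        truncated = self.t >= self.horizon
        return self.state.astype(np.float32), reward, False, truncated, {}


if __name__ == "__main__":
    env = FoodWebEnv(noise_std=0.01)                # nonzero noise for robustness
    model = PPO("MlpPolicy", env, verbose=0)        # model-free: only observations used
    model.learn(total_timesteps=50_000)
```

The key design point mirrored here is that the agent never sees the governing equations, only state observations and rewards, and that adding noise inside `step()` during training is what the abstract credits for the improved robustness of the learned controller.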
Journal Introduction:
Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.