{"title":"Mitigating epidemic spread in complex networks based on deep reinforcement learning.","authors":"Jie Yang, Wenshuang Liu, Xi Zhang, Choujun Zhan","doi":"10.1063/5.0235689","DOIUrl":null,"url":null,"abstract":"<p><p>Complex networks are susceptible to contagious cascades, underscoring the urgency for effective epidemic mitigation strategies. While physical quarantine is a proven mitigation measure for mitigation, it can lead to substantial economic repercussions if not managed properly. This study presents an innovative approach to selecting quarantine targets within complex networks, aiming for an efficient and economic epidemic response. We model the epidemic spread in complex networks as a Markov chain, accounting for stochastic state transitions and node quarantines. We then leverage deep reinforcement learning (DRL) to design a quarantine strategy that minimizes both infection rates and quarantine costs through a sequence of strategic node quarantines. Our DRL agent is specifically trained with the proximal policy optimization algorithm to optimize these dual objectives. Through simulations in both synthetic small-world and real-world community networks, we demonstrate the efficacy of our strategy in controlling epidemics. Notably, we observe a non-linear pattern in the mitigation effect as the daily maximum quarantine scale increases: the mitigation rate is most pronounced at first but plateaus after reaching a critical threshold. 
This insight is crucial for setting the most effective epidemic mitigation parameters.</p>","PeriodicalId":9974,"journal":{"name":"Chaos","volume":"34 12","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chaos","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1063/5.0235689","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0
Abstract
Complex networks are susceptible to contagious cascades, underscoring the urgency of effective epidemic mitigation strategies. While physical quarantine is a proven mitigation measure, it can lead to substantial economic repercussions if not managed properly. This study presents an innovative approach to selecting quarantine targets within complex networks, aiming for an efficient and economical epidemic response. We model epidemic spread in complex networks as a Markov chain, accounting for stochastic state transitions and node quarantines. We then leverage deep reinforcement learning (DRL) to design a quarantine strategy that minimizes both infection rates and quarantine costs through a sequence of strategic node quarantines. Our DRL agent is trained with the proximal policy optimization (PPO) algorithm to optimize these dual objectives. Through simulations on both synthetic small-world and real-world community networks, we demonstrate the efficacy of our strategy in controlling epidemics. Notably, we observe a non-linear pattern in the mitigation effect as the daily maximum quarantine scale increases: the mitigation rate is most pronounced at first but plateaus after reaching a critical threshold. This insight is crucial for setting the most effective epidemic mitigation parameters.
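The abstract's setup — stochastic infection and recovery transitions on a network, with a per-day budget of node quarantines — can be illustrated with a minimal sketch. This is not the paper's model or its DRL/PPO agent: the graph generator, the SIR-style transition rates (`beta`, `gamma`), and the degree-based quarantine heuristic standing in for the learned policy are all assumptions made for illustration.

```python
import random

def small_world(n, k, p, rng):
    """Ring lattice with random rewiring (Watts-Strogatz-style), as a dict of neighbour sets."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:  # rewire edge (i, j) with probability p
                candidates = [m for m in range(n) if m != i and m not in adj[i]]
                if candidates:
                    new = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def step(adj, state, beta, gamma, quarantined, rng):
    """One Markov-chain transition: S->I via non-quarantined infected neighbours,
    I->R with probability gamma. Quarantined nodes neither transmit nor receive."""
    new = dict(state)
    for v, s in state.items():
        if s == "I" and rng.random() < gamma:
            new[v] = "R"                      # recovery happens even in quarantine
        elif s == "S" and v not in quarantined:
            exposed = sum(1 for u in adj[v]
                          if state[u] == "I" and u not in quarantined)
            if exposed and rng.random() < 1 - (1 - beta) ** exposed:
                new[v] = "I"
    return new

def quarantine_policy(adj, state, budget):
    """Degree-based heuristic stand-in for the paper's DRL policy:
    quarantine up to `budget` highest-degree infected nodes per day."""
    infected = sorted((v for v, s in state.items() if s == "I"),
                      key=lambda v: len(adj[v]), reverse=True)
    return set(infected[:budget])

rng = random.Random(0)
adj = small_world(200, 4, 0.1, rng)
state = {v: "S" for v in adj}
for v in rng.sample(sorted(adj), 5):          # seed a few infections
    state[v] = "I"
quarantined = set()
for day in range(30):
    quarantined |= quarantine_policy(adj, state, budget=3)
    state = step(adj, state, 0.2, 0.1, quarantined, rng)
```

In the actual study, the heuristic above is replaced by a PPO-trained agent whose reward balances infection count against quarantine cost; the plateau the authors report would appear here as diminishing returns when sweeping `budget` upward.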
Journal Introduction:
Chaos: An Interdisciplinary Journal of Nonlinear Science is a peer-reviewed journal devoted to increasing the understanding of nonlinear phenomena and describing their manifestations in a manner comprehensible to researchers from a broad spectrum of disciplines.