Mohamad Abed El Rahman Hammoud, Naila Raboudi, Edriss S. Titi, Omar Knio, Ibrahim Hoteit
{"title":"利用深度强化学习实现混沌系统中的数据同化","authors":"Mohamad Abed El Rahman Hammoud, Naila Raboudi, Edriss S. Titi, Omar Knio, Ibrahim Hoteit","doi":"10.1029/2023MS004178","DOIUrl":null,"url":null,"abstract":"<p>Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct each of the ensemble forecast member's state with incoming observations. Recent advancements have witnessed the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. Our investigation focuses on demonstrating this approach to the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize the geometric series with terms that are proportional to the negative root-mean-squared error (RMSE) between the observations and corresponding forecast states. Consequently, the agent develops a correction strategy, enhancing model forecasts based on available observations. Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework that relies on randomly sampling the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably when compared to the EnKF. 
Additionally, we illustrate the agent's capability to assimilate non-Gaussian observations, addressing one of the limitations of the EnKF.</p>","PeriodicalId":14881,"journal":{"name":"Journal of Advances in Modeling Earth Systems","volume":"16 8","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1029/2023MS004178","citationCount":"0","resultStr":"{\"title\":\"Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning\",\"authors\":\"Mohamad Abed El Rahman Hammoud, Naila Raboudi, Edriss S. Titi, Omar Knio, Ibrahim Hoteit\",\"doi\":\"10.1029/2023MS004178\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct each of the ensemble forecast member's state with incoming observations. Recent advancements have witnessed the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. Our investigation focuses on demonstrating this approach to the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize the geometric series with terms that are proportional to the negative root-mean-squared error (RMSE) between the observations and corresponding forecast states. Consequently, the agent develops a correction strategy, enhancing model forecasts based on available observations. 
Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework that relies on randomly sampling the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably when compared to the EnKF. Additionally, we illustrate the agent's capability to assimilate non-Gaussian observations, addressing one of the limitations of the EnKF.</p>\",\"PeriodicalId\":14881,\"journal\":{\"name\":\"Journal of Advances in Modeling Earth Systems\",\"volume\":\"16 8\",\"pages\":\"\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1029/2023MS004178\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Advances in Modeling Earth Systems\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1029/2023MS004178\",\"RegionNum\":2,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"METEOROLOGY & ATMOSPHERIC SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Advances in Modeling Earth Systems","FirstCategoryId":"89","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1029/2023MS004178","RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"METEOROLOGY & ATMOSPHERIC SCIENCES","Score":null,"Total":0}
Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning
Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct each of the ensemble forecast member's state with incoming observations. Recent advancements have witnessed the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. Our investigation focuses on demonstrating this approach to the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize the geometric series with terms that are proportional to the negative root-mean-squared error (RMSE) between the observations and corresponding forecast states. Consequently, the agent develops a correction strategy, enhancing model forecasts based on available observations. Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework that relies on randomly sampling the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably when compared to the EnKF. Additionally, we illustrate the agent's capability to assimilate non-Gaussian observations, addressing one of the limitations of the EnKF.
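The paper's actual correction policy is learned with deep RL, which is not reproduced here. As a minimal sketch of the surrounding Monte Carlo assimilation loop on the Lorenz 63 system, the code below substitutes a hand-tuned Gaussian nudge (`stochastic_policy`, with assumed `gain` and `noise_std` parameters) for the trained policy network: the forecast is propagated with the chaotic model, an observation arrives, and an ensemble of assimilated realizations is generated by repeatedly sampling the stochastic action.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system (standard parameters)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step of the Lorenz 63 model."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def stochastic_policy(forecast, obs, gain=0.5, noise_std=0.1, rng=None):
    """Toy stand-in for the learned stochastic policy: the mean action nudges
    the forecast toward the observation, and the Gaussian action noise makes
    each sample a distinct assimilated realization."""
    rng = np.random.default_rng() if rng is None else rng
    mean_action = gain * (obs - forecast)
    return forecast + mean_action + rng.normal(0.0, noise_std, size=forecast.shape)

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.0, 1.0])
forecast = truth + rng.normal(0.0, 1.0, size=3)  # perturbed initial forecast

# One assimilation cycle: forecast 25 model steps, then correct with a noisy
# full-state observation by sampling the policy 20 times (the ensemble).
for _ in range(25):
    truth = rk4_step(truth)
    forecast = rk4_step(forecast)
obs = truth + rng.normal(0.0, 0.1, size=3)

ensemble = np.array([stochastic_policy(forecast, obs, rng=rng) for _ in range(20)])
analysis = ensemble.mean(axis=0)

rmse_before = np.sqrt(np.mean((forecast - truth) ** 2))
rmse_after = np.sqrt(np.mean((analysis - truth) ** 2))
print(f"forecast RMSE: {rmse_before:.3f}, analysis RMSE: {rmse_after:.3f}")
```

In the paper this correction is learned, with the agent rewarded by a discounted (geometric) series of negative RMSE terms; here the nudging gain is fixed purely for illustration, and the ensemble arises, as in the paper's framework, from random sampling of the stochastic action policy.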
Journal Introduction:
The Journal of Advances in Modeling Earth Systems (JAMES) is committed to advancing the science of Earth systems modeling by offering high-quality scientific research through online availability and open access licensing. JAMES invites authors and readers from the international Earth systems modeling community.
Open access. Articles are available free of charge for everyone with Internet access to view and download.
Formal peer review.
Supplemental material, such as code samples, images, and visualizations, is published at no additional charge.
No additional charge for color figures.
Modest page charges to cover production costs.
Articles are published in high-quality full-text PDF, HTML, and XML.
Internal and external reference linking, DOI registration, and forward linking via CrossRef.