A novel ensemble deep reinforcement learning model for short-term load forecasting based on Q-learning dynamic model selection
Xin He, Wenlu Zhao, Licheng Zhang, Qiushi Zhang, Xinyu Li
The Journal of Engineering, published 2024-07-01. DOI: 10.1049/tje2.12409
Abstract
Short-term load forecasting is critical for power system planning and operations, and ensemble forecasting methods for electricity loads have been shown to be effective in obtaining accurate forecasts. However, the weights in ensemble prediction models are usually preset based on the overall performance after training, which prevents the model from adapting to different scenarios and limits the improvement of prediction performance. To further improve the accuracy and validity of the ensemble prediction method, this paper proposes an ensemble deep reinforcement learning approach that uses Q-learning dynamic weight assignment to account for local behaviours caused by changes in the external environment. First, variational mode decomposition (VMD) is used to reduce the non-stationarity of the original data by decomposing the load sequence. Then, the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) are selected as the basic power load predictors. Finally, the predictions of the three sub-predictors are combined using the optimal weights generated by the Q-learning algorithm to obtain the final forecast. The results show that the forecasting capability of the proposed method outperforms all sub-models and several baseline ensemble forecasting methods.
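To make the dynamic-weighting idea concrete, the sketch below shows how tabular Q-learning could assign ensemble weights to three sub-predictor forecasts. It is a minimal illustration under stated assumptions, not the paper's implementation: the discretized action space of candidate weight vectors, the state encoding (assumed to summarize recent relative sub-model errors), and all identifiers such as candidate_weights, q_table, and train are hypothetical.

```python
import numpy as np

# Minimal sketch of a Q-learning weight selector for an ensemble of three
# sub-predictors (e.g. RNN, LSTM, GRU forecasts). The action space, state
# discretization, and hyperparameters below are illustrative assumptions,
# not details taken from the paper.

# Candidate weight vectors (actions): a coarse grid over the 2-simplex.
candidate_weights = [
    np.array([w1, w2, 1.0 - w1 - w2])
    for w1 in np.arange(0.0, 1.01, 0.25)
    for w2 in np.arange(0.0, 1.01 - w1, 0.25)
]

n_states = 10                      # assumed discretization of the "environment" state
n_actions = len(candidate_weights)
q_table = np.zeros((n_states, n_actions))

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def choose_action(state, rng):
    """Epsilon-greedy choice of a weight vector for the current state."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_table[state]))

def ensemble_forecast(sub_preds, weights):
    """Weighted combination of the three sub-predictor outputs."""
    return float(np.dot(weights, sub_preds))

def q_update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += alpha * (reward + gamma * best_next - q_table[state, action])

def train(sub_predictions, actuals, states, rng=np.random.default_rng(0)):
    """Learn state-dependent ensemble weights from historical forecasts.

    sub_predictions[t] holds the three sub-model forecasts at step t,
    actuals[t] the observed load, and states[t] a discrete state index.
    """
    for t in range(len(actuals) - 1):
        s = states[t]
        a = choose_action(s, rng)
        y_hat = ensemble_forecast(sub_predictions[t], candidate_weights[a])
        reward = -abs(actuals[t] - y_hat)      # smaller forecast error -> larger reward
        q_update(s, a, reward, states[t + 1])
    return q_table
```

In this sketch the reward is the negative absolute forecast error, so the agent gradually prefers the weight vector that minimizes the ensemble error in each state; after training, the greedy action per state supplies the weights used to combine the sub-predictor forecasts.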