{"title":"基于深度强化学习的在线集成聚合时间序列预测","authors":"A. Saadallah, K. Morik","doi":"10.1109/DSAA53316.2021.9564132","DOIUrl":null,"url":null,"abstract":"Both complex and evolving nature of time series structure make forecasting among one of the most important and challenging tasks in time series analysis. Typical methods for forecasting are designed to model time-evolving dependencies between data observations. However, it is generally accepted that none of them is universally valid for every application. Therefore, methods for learning heterogeneous ensembles by combining a diverse set of forecasts together appear as a promising solution to tackle this task. Several approaches, ranging from simple and enhanced averaging tactics to applying meta-learning methods, have been proposed to learn how to combine individual models in an ensemble. However, finding the optimal strategy for ensemble aggregation remains an open research question, particularly, when the ensemble needs to be adapted in real time. In this paper, we leverage a deep reinforcement learning framework for learning linearly weighted ensembles as a meta-learning method. In this framework, the combination policy in ensembles is modelled as a sequential decision making process, and an actor-critic model aims at learning the optimal weights in a continuous action space. The policy is updated following a drift detection mechanism for tracking performance shifts of the ensemble model. An extensive empirical study on many real-world datasets demonstrates that our method achieves excellent or on par results in comparison to the state-of-the-art approaches as well as several baselines.","PeriodicalId":129612,"journal":{"name":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Online Ensemble Aggregation using Deep Reinforcement Learning for Time Series Forecasting\",\"authors\":\"A. Saadallah, K. Morik\",\"doi\":\"10.1109/DSAA53316.2021.9564132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both complex and evolving nature of time series structure make forecasting among one of the most important and challenging tasks in time series analysis. Typical methods for forecasting are designed to model time-evolving dependencies between data observations. However, it is generally accepted that none of them is universally valid for every application. Therefore, methods for learning heterogeneous ensembles by combining a diverse set of forecasts together appear as a promising solution to tackle this task. Several approaches, ranging from simple and enhanced averaging tactics to applying meta-learning methods, have been proposed to learn how to combine individual models in an ensemble. However, finding the optimal strategy for ensemble aggregation remains an open research question, particularly, when the ensemble needs to be adapted in real time. In this paper, we leverage a deep reinforcement learning framework for learning linearly weighted ensembles as a meta-learning method. In this framework, the combination policy in ensembles is modelled as a sequential decision making process, and an actor-critic model aims at learning the optimal weights in a continuous action space. The policy is updated following a drift detection mechanism for tracking performance shifts of the ensemble model. 
An extensive empirical study on many real-world datasets demonstrates that our method achieves excellent or on par results in comparison to the state-of-the-art approaches as well as several baselines.\",\"PeriodicalId\":129612,\"journal\":{\"name\":\"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)\",\"volume\":\"134 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSAA53316.2021.9564132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSAA53316.2021.9564132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Online Ensemble Aggregation using Deep Reinforcement Learning for Time Series Forecasting
The complex and evolving nature of time series makes forecasting one of the most important and challenging tasks in time series analysis. Typical forecasting methods are designed to model the time-evolving dependencies between data observations. However, it is generally accepted that no single method is universally valid for every application. Methods that learn heterogeneous ensembles by combining a diverse set of forecasts therefore appear to be a promising solution. Several approaches, ranging from simple and enhanced averaging tactics to meta-learning methods, have been proposed to learn how to combine the individual models of an ensemble. However, finding the optimal strategy for ensemble aggregation remains an open research question, particularly when the ensemble needs to be adapted in real time. In this paper, we leverage a deep reinforcement learning framework as a meta-learning method for learning linearly weighted ensembles. In this framework, the combination policy of the ensemble is modelled as a sequential decision-making process, and an actor-critic model learns the optimal weights in a continuous action space. The policy is updated following a drift detection mechanism that tracks performance shifts of the ensemble model. An extensive empirical study on many real-world datasets demonstrates that our method achieves results that are superior or on par with those of state-of-the-art approaches as well as several baselines.
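The abstract describes the method only at a high level. As a concrete illustration, the following is a minimal sketch, not the authors' implementation, of two of the ingredients it names: an actor network that maps a state (for instance, the base forecasters' recent errors) to continuous combination weights on the probability simplex, yielding a linearly weighted ensemble forecast, and a toy drift check standing in for the paper's drift detection mechanism. All names (ActorNet, combine, drift_detected) and the choice of state representation are assumptions for illustration; the critic and its update rule are omitted.

```python
import torch
import torch.nn as nn

class ActorNet(nn.Module):
    """Maps a state (e.g., recent errors of the K base forecasters) to K weights."""
    def __init__(self, state_dim: int, n_models: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_models),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the continuous action on the probability simplex:
        # non-negative weights that sum to 1, i.e. a convex linear combination.
        return torch.softmax(self.net(state), dim=-1)

def combine(weights: torch.Tensor, base_forecasts: torch.Tensor) -> torch.Tensor:
    # Ensemble forecast as a weighted sum of the base models' forecasts.
    return (weights * base_forecasts).sum(dim=-1)

def drift_detected(errors: list, window: int = 50, factor: float = 2.0) -> bool:
    # Toy drift check: recent mean error far above the long-run mean error.
    # A stand-in for a proper change detector (e.g. Page-Hinkley or ADWIN),
    # used to decide when the combination policy should be updated.
    if len(errors) < 2 * window:
        return False
    recent = sum(errors[-window:]) / window
    past = sum(errors[:-window]) / (len(errors) - window)
    return recent > factor * past

# Usage: three base forecasters; the state here is simply their last errors.
actor = ActorNet(state_dim=3, n_models=3)
state = torch.tensor([0.2, 0.5, 0.1])           # recent per-model errors (illustrative)
forecasts = torch.tensor([101.0, 98.5, 100.2])  # the base models' predictions
y_hat = combine(actor(state), forecasts)        # scalar ensemble forecast
```

In this sketch the softmax constrains the action to valid mixture weights; the paper's actor-critic setup would train such a policy by rewarding forecast accuracy, with drift detection triggering further policy updates as the ensemble's performance shifts.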