Generative Representation Learning in Recurrent Neural Networks for Causal Timeseries Forecasting

Georgios Chatziparaskevas, Ioannis Mademlis, Ioannis Pitas
Journal: IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6412–6425
DOI: 10.1109/TAI.2024.3446465
Publication date: 2024-08-20
URL: https://ieeexplore.ieee.org/document/10643032/

Abstract

Feed-forward deep neural networks (DNNs) are the state of the art in timeseries forecasting. A particularly significant scenario is the causal one: an arbitrary subset of variables of a given multivariate timeseries is specified as the forecasting target, with the remaining (exogenous) variables causing the target at each time instant. The goal is then to predict a temporal window of future target values, given a window of historical exogenous values. To this end, this article proposes a novel deep recurrent neural architecture, called generative-regressing recurrent neural network (GRRNN), which surpasses competing ones in causal forecasting evaluation metrics by combining generative learning and regression. During training, the generative module learns to synthesize historical target timeseries from historical exogenous inputs via conditional adversarial learning, thus internally encoding the input timeseries into semantically meaningful features. During a forward pass, these features are passed as input to the regression module, which outputs the actual future target forecasts in a sequence-to-sequence fashion. Thus, the task of timeseries generation is synergistically combined with the task of timeseries forecasting, under an end-to-end multitask training setting. Methodologically, GRRNN contributes a novel augmentation of pure supervised learning, tailored to causal timeseries forecasting, which essentially forces the generative module to transform the historical exogenous timeseries into a more appropriate representation before feeding it as input to the actual forecasting regressor. Extensive experimental evaluation on relevant public datasets obtained from disparate fields, ranging from air pollution data to sentiment analysis of social media posts, confirms that GRRNN achieves top performance in multistep long-term forecasting.
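The data flow the abstract describes — an RNN-based generative module that encodes the historical exogenous window while synthesizing the historical target, followed by a regression module that maps the learned features to a future target window — can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: all dimensions are hypothetical, a plain Elman RNN stands in for the authors' recurrent generative module, the adversarial (conditional GAN) training is replaced here by a simple reconstruction loss, and the one-shot linear readout stands in for proper sequence-to-sequence decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 8 exogenous variables,
# a 24-step historical window, 2 target variables, a 12-step horizon.
n_exo, t_hist, n_tgt, t_fut, n_hid = 8, 24, 2, 12, 16

# --- Generative module: an Elman RNN encodes the exogenous history and,
# at each step, synthesizes the corresponding historical target value.
W_xh = rng.normal(0, 0.1, (n_hid, n_exo))
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))
W_hy = rng.normal(0, 0.1, (n_tgt, n_hid))

def generative_encode(x_hist):
    """Encode exogenous history; emit synthesized targets and features."""
    h = np.zeros(n_hid)
    synth, feats = [], []
    for t in range(x_hist.shape[0]):
        h = np.tanh(W_xh @ x_hist[t] + W_hh @ h)
        synth.append(W_hy @ h)   # reconstructed historical target
        feats.append(h)          # internal representation for the regressor
    return np.array(synth), np.array(feats)

# --- Regression module: consumes the learned features and outputs the
# future target window (a linear stand-in for seq-to-seq decoding).
W_out = rng.normal(0, 0.1, (t_fut * n_tgt, n_hid))

def regress(feats):
    return (W_out @ feats[-1]).reshape(t_fut, n_tgt)

x_hist = rng.normal(size=(t_hist, n_exo))   # historical exogenous window
y_hist = rng.normal(size=(t_hist, n_tgt))   # historical target window

synth, feats = generative_encode(x_hist)
forecast = regress(feats)

# Multitask objective: a generation term (reconstruction stands in for
# the adversarial loss) would be optimized jointly with the forecasting
# regression term, end to end.
gen_loss = np.mean((synth - y_hist) ** 2)
```

Under this sketch, `forecast` has shape `(12, 2)` (horizon × targets) and `synth` has shape `(24, 2)`, mirroring how the generation task shapes the representation that the regressor consumes.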