{"title":"AGCNT: Adaptive Graph Convolutional Network for Transformer-based Long Sequence Time-Series Forecasting","authors":"Hongyang Su, Xiaolong Wang, Yang Qin","doi":"10.1145/3459637.3482054","DOIUrl":null,"url":null,"abstract":"Long sequence time-series forecasting(LSTF) plays an important role in a variety of real-world application scenarios, such as electricity forecasting, weather forecasting, and traffic flow forecasting. It has previously been observed that transformer-based models have achieved outstanding results on LSTF tasks, which can reduce the complexity of the model and maintain stable prediction accuracy. Nevertheless, there are still some issues that limit the performance of transformer-based models for LSTF tasks: (i) the potential correlation between sequences is not considered; (ii) the inherent structure of encoder-decoder is difficult to expand after being optimized from the aspect of complexity. In order to solve these two problems, we propose a transformer-based model, named AGCNT, which is efficient and can capture the correlation between the sequences in the multivariate LSTF task without causing the memory bottleneck. Specifically, AGCNT has several characteristics: (i) a probsparse adaptive graph self-attention, which maps long sequences into a low-dimensional dense graph structure with an adaptive graph generation and captures the relationships between sequences with an adaptive graph convolution; (ii) the stacked encoder with distilling probsparse graph self-attention integrates the graph attention mechanism and retains the dominant attention of the cascade layer, which preserves the correlation between sparse queries from long sequences; (iii) the stacked decoder with generative inference generates all prediction values in one forward operation, which can improve the inference speed of long-term predictions. Experimental results on 4 large-scale datasets demonstrate the AGCNT outperforms state-of-the-art baselines.","PeriodicalId":405296,"journal":{"name":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","volume":"2169 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459637.3482054","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Long sequence time-series forecasting (LSTF) plays an important role in a variety of real-world application scenarios, such as electricity forecasting, weather forecasting, and traffic flow forecasting. Transformer-based models have achieved outstanding results on LSTF tasks, as they reduce model complexity while maintaining stable prediction accuracy. Nevertheless, some issues still limit the performance of transformer-based models on LSTF tasks: (i) the potential correlation between sequences is not considered; (ii) the inherent encoder-decoder structure is difficult to extend once it has been optimized for complexity. To address these two problems, we propose a transformer-based model, named AGCNT, which is efficient and captures the correlation between sequences in the multivariate LSTF task without causing a memory bottleneck. Specifically, AGCNT has several characteristics: (i) a probsparse adaptive graph self-attention, which maps long sequences into a low-dimensional dense graph structure through adaptive graph generation and captures the relationships between sequences with an adaptive graph convolution; (ii) a stacked encoder with distilling probsparse graph self-attention, which integrates the graph attention mechanism and retains the dominant attention of the cascade layer, preserving the correlation between sparse queries from long sequences; (iii) a stacked decoder with generative inference, which generates all prediction values in one forward operation and thus improves the inference speed of long-term predictions. Experimental results on 4 large-scale datasets demonstrate that AGCNT outperforms state-of-the-art baselines.
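The abstract does not include implementation details, so the following PyTorch-style sketch is only a rough illustration of what "adaptive graph generation" followed by an "adaptive graph convolution" over the series could look like. All class and parameter names, the tensor shapes, and the embedding-product construction of the adjacency are assumptions (loosely in the style of adaptive-adjacency modules used in graph-based forecasting models), not AGCNT's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGraphConv(nn.Module):
    """Hypothetical sketch: learn a dense adjacency between series and mix hidden states with it."""

    def __init__(self, num_series: int, d_model: int, embed_dim: int = 16):
        super().__init__()
        # Learnable node embeddings: the graph is inferred from data, not given a priori.
        self.emb_src = nn.Parameter(torch.randn(num_series, embed_dim))
        self.emb_dst = nn.Parameter(torch.randn(num_series, embed_dim))
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_series, d_model) -- per-series hidden states, e.g. after self-attention.
        # Adaptive adjacency: dense, row-normalized, trained end to end with the rest of the model.
        adj = F.softmax(F.relu(self.emb_src @ self.emb_dst.t()), dim=-1)  # (num_series, num_series)
        # Graph convolution: propagate information across series according to the learned adjacency.
        mixed = torch.einsum("nm,bmd->bnd", adj, x)
        return self.proj(mixed)


# Usage example (shapes are illustrative): 7 series, hidden size 64, batch of 32.
layer = AdaptiveGraphConv(num_series=7, d_model=64)
hidden = torch.randn(32, 7, 64)
out = layer(hidden)  # (32, 7, 64)
```

The softmax-over-ReLU product of two embedding tables is one common way to obtain a dense, differentiable adjacency without a predefined graph; AGCNT's probsparse graph self-attention presumably combines such a learned graph with sparse query selection, which this sketch does not attempt to reproduce.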