{"title":"MR-Transformer: Multiresolution Transformer for Multivariate Time Series Prediction.","authors":"Siying Zhu, Jiawei Zheng, Qianli Ma","doi":"10.1109/TNNLS.2023.3327416","DOIUrl":null,"url":null,"abstract":"<p><p>Multivariate time series (MTS) prediction has been studied broadly, which is widely applied in real-world applications. Recently, transformer-based methods have shown the potential in this task for their strong sequence modeling ability. Despite progress, these methods pay little attention to extracting short-term information in the context, while short-term patterns play an essential role in reflecting local temporal dynamics. Moreover, we argue that there are both consistent and specific characteristics among multiple variables, which should be fully considered for MTS modeling. To this end, we propose a multiresolution transformer (MR-Transformer) for MTS prediction, modeling MTS from both the temporal and the variable resolution. Specifically, for the temporal resolution, we design a long short-term transformer. We first split the sequence into nonoverlapping segments in an adaptive way and then extract short-term patterns within segments, while long-term patterns are captured by the inherent attention mechanism. Both of them are aggregated together to capture the temporal dependencies. For the variable resolution, besides the variable-consistent features learned by long short-term transformer, we also design a temporal convolution module to capture the specific features of each variable individually. MR-Transformer enhances the MTS modeling ability by combining multiresolution features between both time steps and variables. Extensive experiments conducted on real-world time series datasets show that MR-Transformer significantly outperforms the state-of-the-art MTS prediction models. The visualization analysis also demonstrates the effectiveness of the proposed model.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":""},"PeriodicalIF":10.2000,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TNNLS.2023.3327416","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Multivariate time series (MTS) prediction has been studied broadly and is widely applied in real-world settings. Recently, transformer-based methods have shown potential in this task owing to their strong sequence-modeling ability. Despite this progress, these methods pay little attention to extracting short-term information from the context, even though short-term patterns play an essential role in reflecting local temporal dynamics. Moreover, we argue that multiple variables exhibit both consistent and variable-specific characteristics, which should be fully considered in MTS modeling. To this end, we propose a multiresolution transformer (MR-Transformer) for MTS prediction that models MTS at both the temporal and the variable resolution. For the temporal resolution, we design a long short-term transformer: the sequence is first split adaptively into nonoverlapping segments, short-term patterns are extracted within each segment, and long-term patterns are captured by the inherent attention mechanism; the two are then aggregated to capture temporal dependencies. For the variable resolution, beyond the variable-consistent features learned by the long short-term transformer, we also design a temporal convolution module that captures the specific features of each variable individually. MR-Transformer enhances MTS modeling by combining multiresolution features across both time steps and variables. Extensive experiments on real-world time series datasets show that MR-Transformer significantly outperforms state-of-the-art MTS prediction models, and visualization analysis further demonstrates the effectiveness of the proposed model.
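To make the described architecture concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: a long short-term branch that extracts short-term patterns within nonoverlapping segments while self-attention captures long-term patterns, and a per-variable (depthwise) temporal convolution branch for variable-specific features. All module names, dimensions, and the fixed (rather than adaptive) segmentation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; the paper's adaptive segmentation and exact
# aggregation scheme are not reproduced here.
import torch
import torch.nn as nn


class LongShortTermBlock(nn.Module):
    """Combine segment-local (short-term) and attention-based (long-term) features."""

    def __init__(self, d_model: int, seg_len: int, n_heads: int = 4):
        super().__init__()
        self.seg_len = seg_len
        # Short-term branch: mix time steps within each nonoverlapping segment.
        self.short = nn.Sequential(
            nn.Linear(seg_len * d_model, seg_len * d_model), nn.GELU()
        )
        # Long-term branch: standard self-attention over all time steps.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model); time is assumed divisible by seg_len.
        b, t, d = x.shape
        segs = x.reshape(b, t // self.seg_len, self.seg_len * d)
        short = self.short(segs).reshape(b, t, d)  # short-term patterns
        long, _ = self.attn(x, x, x)               # long-term patterns
        return self.norm(x + short + long)         # aggregate both branches


class VariableSpecificConv(nn.Module):
    """Depthwise temporal convolution: one filter per variable (channel)."""

    def __init__(self, n_vars: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, n_vars, kernel_size,
                              padding=kernel_size // 2, groups=n_vars)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars) -> convolve each variable independently.
        return self.conv(x.transpose(1, 2)).transpose(1, 2)


if __name__ == "__main__":
    b, t, n_vars, d_model, seg_len = 8, 96, 7, 64, 8
    series = torch.randn(b, t, n_vars)
    embed = nn.Linear(n_vars, d_model)
    block = LongShortTermBlock(d_model, seg_len)
    var_conv = VariableSpecificConv(n_vars)
    shared = block(embed(series))        # variable-consistent features
    specific = var_conv(series)          # variable-specific features
    print(shared.shape, specific.shape)  # (8, 96, 64) and (8, 96, 7)
```

The `groups=n_vars` convolution is one natural way to realize "specific features of each variable individually", since each channel gets its own filter with no cross-variable mixing; how the two feature streams are fused for prediction is left out of this sketch.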
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.