{"title":"集成编解码器分解变压器长期序列预测","authors":"Benhan Li , Wei Zhang , Mingxin Lu","doi":"10.1016/j.neunet.2025.107484","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, Transformer-based and multilayer perceptron (MLP) based architectures have formed a competitive landscape in the field of time series forecasting. There is evidence that series decomposition can further enhance the model’s ability to perceive temporal patterns. However, most of the existing Transformer-based decomposed models capture seasonal features progressively and assist in adding trends for forecasting, but ignore the deep information contained in trends and may lead to pattern mismatch in the fusion stage. In addition, the permutation invariance of the attention mechanism inevitably leads to the loss of temporal order. After in-depth analysis of the applicability of attention and linear layers to series components, we propose to use attention to learn multivariate correlations from trends, and MLP to capture seasonal patterns. We further introduce an integrated codec that provides the same multivariate relationship representation for both the encoding and decoding stages, ensuring effective inheritance of temporal dependencies. To mitigate the fading of sequentiality during attention, we propose trend enhancement module, which maintains the stability of the trend by expanding the series to a longer time scale, helping the attention mechanism to achieve fine-grained feature representations. Extensive experiments show that our model exhibits state-of-the-art prediction performance on large-scale datasets.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"188 ","pages":"Article 107484"},"PeriodicalIF":6.0000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Integrated codec decomposed Transformer for long-term series forecasting\",\"authors\":\"Benhan Li , Wei Zhang , Mingxin Lu\",\"doi\":\"10.1016/j.neunet.2025.107484\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recently, Transformer-based and multilayer perceptron (MLP) based architectures have formed a competitive landscape in the field of time series forecasting. There is evidence that series decomposition can further enhance the model’s ability to perceive temporal patterns. However, most of the existing Transformer-based decomposed models capture seasonal features progressively and assist in adding trends for forecasting, but ignore the deep information contained in trends and may lead to pattern mismatch in the fusion stage. In addition, the permutation invariance of the attention mechanism inevitably leads to the loss of temporal order. After in-depth analysis of the applicability of attention and linear layers to series components, we propose to use attention to learn multivariate correlations from trends, and MLP to capture seasonal patterns. We further introduce an integrated codec that provides the same multivariate relationship representation for both the encoding and decoding stages, ensuring effective inheritance of temporal dependencies. To mitigate the fading of sequentiality during attention, we propose trend enhancement module, which maintains the stability of the trend by expanding the series to a longer time scale, helping the attention mechanism to achieve fine-grained feature representations. 
Extensive experiments show that our model exhibits state-of-the-art prediction performance on large-scale datasets.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"188 \",\"pages\":\"Article 107484\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025003636\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025003636","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Integrated codec decomposed Transformer for long-term series forecasting
Recently, Transformer-based and multilayer perceptron (MLP) based architectures have formed a competitive landscape in time series forecasting. There is evidence that series decomposition can further enhance a model's ability to perceive temporal patterns. However, most existing Transformer-based decomposition models capture seasonal features progressively and only add trends back in as an auxiliary signal for forecasting; they ignore the deep information contained in trends and may suffer pattern mismatch at the fusion stage. In addition, the permutation invariance of the attention mechanism inevitably causes a loss of temporal order. After an in-depth analysis of how well attention and linear layers suit each series component, we propose using attention to learn multivariate correlations from trends and an MLP to capture seasonal patterns. We further introduce an integrated codec that provides the same multivariate relationship representation to both the encoding and decoding stages, ensuring effective inheritance of temporal dependencies. To mitigate the fading of sequentiality during attention, we propose a trend enhancement module, which maintains the stability of the trend by expanding the series to a longer time scale, helping the attention mechanism achieve fine-grained feature representations. Extensive experiments show that our model achieves state-of-the-art prediction performance on large-scale datasets.
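The abstract describes the architecture only at a high level. As a rough illustration (not the authors' implementation, and omitting the integrated codec and trend enhancement module), the sketch below assumes a simple moving-average decomposition and shows the proposed division of labour: attention learns cross-variable correlations from the trend, while an MLP models the seasonal component. All class names, layer sizes, and the kernel size are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn


class MovingAvgDecomposition(nn.Module):
    """Split a series of shape [batch, length, variables] into trend + seasonal parts."""

    def __init__(self, kernel_size: int = 25):
        super().__init__()
        # Moving average as trend extractor; the seasonal part is the residual.
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x: torch.Tensor):
        # AvgPool1d expects [batch, channels, length], so transpose around it.
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        seasonal = x - trend
        return trend, seasonal


class DecomposedForecaster(nn.Module):
    """Attention learns cross-variable correlations from the trend;
    an MLP maps the seasonal component to the forecast horizon."""

    def __init__(self, seq_len: int, pred_len: int, n_heads: int = 1):
        super().__init__()
        self.decomp = MovingAvgDecomposition()
        # Attention across variables: each variable's trend is treated as one token.
        self.var_attention = nn.MultiheadAttention(seq_len, n_heads, batch_first=True)
        self.trend_proj = nn.Linear(seq_len, pred_len)
        self.seasonal_mlp = nn.Sequential(
            nn.Linear(seq_len, 2 * seq_len), nn.GELU(), nn.Linear(2 * seq_len, pred_len)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        trend, seasonal = self.decomp(x)               # each [batch, seq_len, n_vars]
        trend_t = trend.transpose(1, 2)                # [batch, n_vars, seq_len]
        trend_t, _ = self.var_attention(trend_t, trend_t, trend_t)
        trend_out = self.trend_proj(trend_t)           # [batch, n_vars, pred_len]
        seasonal_out = self.seasonal_mlp(seasonal.transpose(1, 2))
        # Fuse the two components and restore [batch, pred_len, n_vars].
        return (trend_out + seasonal_out).transpose(1, 2)


# Usage: 7 variables, a 96-step history, and a 192-step forecast horizon.
model = DecomposedForecaster(seq_len=96, pred_len=192)
y = model(torch.randn(8, 96, 7))
print(y.shape)  # torch.Size([8, 192, 7])
```

In this sketch the fusion is a plain sum of the two component forecasts; in the paper, the integrated codec and trend enhancement module would take the place of the bare attention and linear projection used here.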
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.