An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms

Impact Factor: 2.4 · JCR Q3 · Computer Science, Information Systems
Xianchang Wang, Siyu Dong, Rui Zhang
{"title":"An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms","authors":"Xianchang Wang, Siyu Dong, Rui Zhang","doi":"10.3390/info14110610","DOIUrl":null,"url":null,"abstract":"In the prediction of time series, Empirical Mode Decomposition (EMD) generates subsequences and separates short-term tendencies from long-term ones. However, a single prediction model, including attention mechanism, has varying effects on each subsequence. To accurately capture the regularities of subsequences using an attention mechanism, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. This model combines the results of three networks—LSTM, LSTM-self-attention, and LSTM-temporal attention—all trained using subsequences obtained from EMD. Additionally, since previous research on EMD has been limited to single series analysis, this paper includes multiple series by employing two data pre-processing methods: ‘overall normalization’ and ‘respective normalization’. Experimental results on various datasets demonstrate that compared to models without attention mechanisms, temporal attention improves the prediction accuracy of short- and medium-term decomposed series by 15~28% and 45~72%, respectively; furthermore, it reduces the overall prediction error by 10~17%. The integrated model with temporal attention achieves a reduction in error of approximately 0.3%, primarily when compared to models utilizing only general forms of attention mechanisms. Moreover, after normalizing multiple series separately, the predictive performance is equivalent to that achieved for individual series.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"24 18","pages":"0"},"PeriodicalIF":2.4000,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information (Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14110610","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

In time series prediction, Empirical Mode Decomposition (EMD) generates subsequences that separate short-term tendencies from long-term ones. However, a single prediction model, even one with an attention mechanism, performs unevenly across these subsequences. To capture the regularities of the subsequences accurately with attention mechanisms, we propose an integrated model for time series prediction based on signal decomposition and two attention mechanisms. The model combines the results of three networks (LSTM, LSTM with self-attention, and LSTM with temporal attention), all trained on subsequences obtained from EMD. Additionally, since previous research on EMD has been limited to single-series analysis, this paper covers multiple series by employing two data pre-processing methods: 'overall normalization' and 'respective normalization'. Experimental results on various datasets show that, compared to models without attention mechanisms, temporal attention improves the prediction accuracy of short- and medium-term decomposed series by 15-28% and 45-72%, respectively, and reduces the overall prediction error by 10-17%. The integrated model with temporal attention achieves a further error reduction of approximately 0.3% compared to models using only general forms of attention mechanisms. Moreover, after normalizing multiple series separately, the predictive performance is equivalent to that achieved on individual series.
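To make the described pipeline concrete, here is a minimal sketch (not the authors' code) of the data-preparation stage: decomposing one series into IMF subsequences with EMD, plus the two normalization schemes named above. It assumes the PyEMD package (installed as EMD-signal); the helper names are illustrative.

    import numpy as np
    from PyEMD import EMD  # pip install EMD-signal

    def decompose(series: np.ndarray) -> np.ndarray:
        """Split one series into IMF subsequences; the trend/residue is the last row."""
        return EMD()(series)  # shape: (n_imfs, len(series))

    def overall_normalization(series_list):
        """'Overall normalization': min-max scale every series with statistics
        pooled over ALL series, preserving their relative magnitudes."""
        pooled = np.concatenate(series_list)
        lo, hi = pooled.min(), pooled.max()
        return [(s - lo) / (hi - lo) for s in series_list]

    def respective_normalization(series_list):
        """'Respective normalization': min-max scale each series with its OWN
        statistics, so every series spans [0, 1] independently."""
        return [(s - s.min()) / (s.max() - s.min()) for s in series_list]

    # Toy demo: two synthetic series with different scales.
    t = np.linspace(0, 10, 500)
    series = [np.sin(2 * np.pi * t), 5.0 + 0.5 * np.sin(6 * np.pi * t)]
    for s in respective_normalization(series):
        print(decompose(s).shape)  # high-frequency IMFs first, trend last

The only difference between the two schemes is where the min/max statistics come from: pooled across all series ('overall') versus computed per series ('respective'), which is what lets the latter treat series of very different magnitudes on an equal footing.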
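The three component networks (LSTM, LSTM with self-attention, and LSTM with temporal attention) are standard building blocks. Below is one plausible PyTorch sketch of the temporal-attention variant, in which a learned score over the LSTM hidden states at every time step yields a context vector for a one-step-ahead forecast; the class name, layer sizes, and forecasting head are illustrative assumptions, not the paper's exact specification.

    import torch
    import torch.nn as nn

    class LSTMTemporalAttention(nn.Module):
        """LSTM whose forecast attends over the hidden states of all time steps."""
        def __init__(self, input_size: int = 1, hidden_size: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.score = nn.Linear(hidden_size, 1)  # one attention score per step
            self.head = nn.Linear(hidden_size, 1)   # one-step-ahead prediction

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h, _ = self.lstm(x)                      # (batch, seq_len, hidden)
            w = torch.softmax(self.score(h), dim=1)  # weights over time steps
            context = (w * h).sum(dim=1)             # (batch, hidden)
            return self.head(context)                # (batch, 1)

    # Toy usage: forecast the next value of 8 subsequence windows of length 30.
    # In the integrated model, each EMD subsequence would be predicted separately
    # and the per-subsequence forecasts combined to reconstruct the series.
    model = LSTMTemporalAttention()
    print(model(torch.randn(8, 30, 1)).shape)  # torch.Size([8, 1])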
Source Journal
Information (Switzerland) · Computer Science, Information Systems
CiteScore: 6.90
Self-citation rate: 0.00%
Articles published: 515
Review time: 11 weeks