{"title":"D2Vformer:基于时间位置嵌入的灵活时间序列预测模型","authors":"Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leungc","doi":"arxiv-2409.11024","DOIUrl":null,"url":null,"abstract":"Time position embeddings capture the positional information of time steps,\noften serving as auxiliary inputs to enhance the predictive capabilities of\ntime series models. However, existing models exhibit limitations in capturing\nintricate time positional information and effectively utilizing these\nembeddings. To address these limitations, this paper proposes a novel model\ncalled D2Vformer. Unlike typical prediction methods that rely on RNNs or\nTransformers, this approach can directly handle scenarios where the predicted\nsequence is not adjacent to the input sequence or where its length dynamically\nchanges. In comparison to conventional methods, D2Vformer undoubtedly saves a\nsignificant amount of training resources. In D2Vformer, the Date2Vec module\nuses the timestamp information and feature sequences to generate time position\nembeddings. Afterward, D2Vformer introduces a new fusion block that utilizes an\nattention mechanism to explore the similarity in time positions between the\nembeddings of the input sequence and the predicted sequence, thereby generating\npredictions based on this similarity. Through extensive experiments on six\ndatasets, we demonstrate that Date2Vec outperforms other time position\nembedding methods, and D2Vformer surpasses state-of-the-art methods in both\nfixed-length and variable-length prediction tasks.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding\",\"authors\":\"Xiaobao Song, Hao Wang, Liwei Deng, Yuxin He, Wenming Cao, Chi-Sing Leungc\",\"doi\":\"arxiv-2409.11024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Time position embeddings capture the positional information of time steps,\\noften serving as auxiliary inputs to enhance the predictive capabilities of\\ntime series models. However, existing models exhibit limitations in capturing\\nintricate time positional information and effectively utilizing these\\nembeddings. To address these limitations, this paper proposes a novel model\\ncalled D2Vformer. Unlike typical prediction methods that rely on RNNs or\\nTransformers, this approach can directly handle scenarios where the predicted\\nsequence is not adjacent to the input sequence or where its length dynamically\\nchanges. In comparison to conventional methods, D2Vformer undoubtedly saves a\\nsignificant amount of training resources. In D2Vformer, the Date2Vec module\\nuses the timestamp information and feature sequences to generate time position\\nembeddings. Afterward, D2Vformer introduces a new fusion block that utilizes an\\nattention mechanism to explore the similarity in time positions between the\\nembeddings of the input sequence and the predicted sequence, thereby generating\\npredictions based on this similarity. 
Through extensive experiments on six\\ndatasets, we demonstrate that Date2Vec outperforms other time position\\nembedding methods, and D2Vformer surpasses state-of-the-art methods in both\\nfixed-length and variable-length prediction tasks.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11024\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
D2Vformer: A Flexible Time Series Prediction Model Based on Time Position Embedding
Time position embeddings capture the positional information of time steps,
often serving as auxiliary inputs to enhance the predictive capabilities of
time series models. However, existing models exhibit limitations in capturing
intricate time positional information and effectively utilizing these
embeddings. To address these limitations, this paper proposes a novel model
called D2Vformer. Unlike typical prediction methods that rely on RNNs or
Transformers, this approach can directly handle scenarios where the predicted
sequence is not adjacent to the input sequence or where its length dynamically
changes. Compared with conventional methods, D2Vformer therefore saves a
significant amount of training resources. In D2Vformer, the Date2Vec module
uses timestamp information and feature sequences to generate time position
embeddings. D2Vformer then introduces a new fusion block that uses an
attention mechanism to measure the similarity in time positions between the
embeddings of the input sequence and those of the predicted sequence, and
generates predictions from this similarity (illustrative sketches of both
modules follow the abstract). Through extensive experiments on six
datasets, we demonstrate that Date2Vec outperforms other time position
embedding methods, and D2Vformer surpasses state-of-the-art methods in both
fixed-length and variable-length prediction tasks.
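
To make the Date2Vec idea concrete, here is a minimal Time2Vec-style sketch
in PyTorch. The abstract does not disclose the module's internals, so the
class name, the linear-plus-sinusoidal decomposition, and all shapes are
assumptions for illustration; the conditioning on the feature sequence
mentioned in the abstract is also omitted for brevity.

```python
# Minimal Time2Vec-style sketch of a Date2Vec-like module (assumed design,
# not the paper's exact formulation).
import torch
import torch.nn as nn


class Date2VecSketch(nn.Module):
    """Map timestamp features (e.g. minute, hour, weekday, month) to a time
    position embedding with one linear (trend) channel and d_embed - 1
    periodic (seasonal) channels."""

    def __init__(self, num_time_feats: int, d_embed: int):
        super().__init__()
        self.linear = nn.Linear(num_time_feats, 1)              # trend
        self.periodic = nn.Linear(num_time_feats, d_embed - 1)  # frequencies/phases

    def forward(self, timestamps: torch.Tensor) -> torch.Tensor:
        # timestamps: (batch, seq_len, num_time_feats)
        trend = self.linear(timestamps)                # (B, L, 1)
        season = torch.sin(self.periodic(timestamps))  # (B, L, d_embed - 1)
        return torch.cat([trend, season], dim=-1)      # (B, L, d_embed)


# Usage: embeddings exist for *any* timestamps, observed or future, which is
# what lets a similarity-based head address arbitrary prediction horizons.
d2v = Date2VecSketch(num_time_feats=4, d_embed=16)
emb = d2v(torch.rand(8, 96, 4))  # (8, 96, 16)
```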
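
The fusion block can be read as cross-attention over time positions: the
future timestamps' embeddings act as queries against the input timestamps'
embeddings, and the observed features are the values. The sketch below
illustrates that reading; all names and projections here are assumptions,
not the paper's exact block.

```python
# Sketch of similarity-based fusion: attend from future time positions
# (queries) to input time positions (keys) and read off input features
# (values). Assumed design for illustration.
import math

import torch
import torch.nn as nn


class TimePositionFusionSketch(nn.Module):
    def __init__(self, d_embed: int, d_feat: int):
        super().__init__()
        self.q = nn.Linear(d_embed, d_embed)
        self.k = nn.Linear(d_embed, d_embed)
        self.v = nn.Linear(d_feat, d_feat)

    def forward(self, fut_pos, in_pos, in_feats):
        # fut_pos:  (B, H, d_embed) time embeddings of the horizon to predict
        # in_pos:   (B, L, d_embed) time embeddings of the observed window
        # in_feats: (B, L, d_feat)  observed feature sequence
        scores = self.q(fut_pos) @ self.k(in_pos).transpose(1, 2)  # (B, H, L)
        attn = torch.softmax(scores / math.sqrt(fut_pos.size(-1)), dim=-1)
        return attn @ self.v(in_feats)  # (B, H, d_feat): one output per future step
```

Because the horizon length H enters only through fut_pos, the same trained
weights can serve horizons of different lengths, or horizons not adjacent to
the input window, which is the flexibility the abstract claims.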