TS-MAE: A masked autoencoder for time series representation learning

IF 8.1 | Q1 Computer Science | COMPUTER SCIENCE, INFORMATION SYSTEMS
Qian Liu, Junchen Ye, Haohan Liang, Leilei Sun, Bowen Du
{"title":"TS-MAE:用于时间序列表示学习的掩码自动编码器","authors":"Qian Liu ,&nbsp;Junchen Ye ,&nbsp;Haohan Liang ,&nbsp;Leilei Sun ,&nbsp;Bowen Du","doi":"10.1016/j.ins.2024.121576","DOIUrl":null,"url":null,"abstract":"<div><div>Self-supervised learning (SSL) has been widely researched in recent years. In Particular, generative self-supervised learning methods have achieved remarkable success in many AI domains, such as MAE in computer vision, well-known BERT, GPT in natural language processing, and GraphMAE in graph learning. However, in the context of time series analysis, not only is the work that follows this line limited but also the performance has not reached the potential as promised in other fields. To fill this gap, we propose a simple and elegant masked autoencoder for time series representation learning. Firstly, unlike most existing work which uses the Transformer as the backbone, we build our model based on neural ordinary differential equation which possesses excellent mathematical properties. Compared with the position encoding in Transformer, modeling the evolution patterns continuously could better extract the temporal dependency. Secondly, a timestamp-wise mask strategy is provided to cooperate with the autoencoder to avoid bias, and it also could reduce the cross-imputation between variables to learn more robust representations. Lastly, extensive experiments conducted on two classical tasks demonstrate the superiority of our model over the state-of-the-art ones.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"690 ","pages":"Article 121576"},"PeriodicalIF":8.1000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TS-MAE: A masked autoencoder for time series representation learning\",\"authors\":\"Qian Liu ,&nbsp;Junchen Ye ,&nbsp;Haohan Liang ,&nbsp;Leilei Sun ,&nbsp;Bowen Du\",\"doi\":\"10.1016/j.ins.2024.121576\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Self-supervised learning (SSL) has been widely researched in recent years. In Particular, generative self-supervised learning methods have achieved remarkable success in many AI domains, such as MAE in computer vision, well-known BERT, GPT in natural language processing, and GraphMAE in graph learning. However, in the context of time series analysis, not only is the work that follows this line limited but also the performance has not reached the potential as promised in other fields. To fill this gap, we propose a simple and elegant masked autoencoder for time series representation learning. Firstly, unlike most existing work which uses the Transformer as the backbone, we build our model based on neural ordinary differential equation which possesses excellent mathematical properties. Compared with the position encoding in Transformer, modeling the evolution patterns continuously could better extract the temporal dependency. Secondly, a timestamp-wise mask strategy is provided to cooperate with the autoencoder to avoid bias, and it also could reduce the cross-imputation between variables to learn more robust representations. 
Lastly, extensive experiments conducted on two classical tasks demonstrate the superiority of our model over the state-of-the-art ones.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":\"690 \",\"pages\":\"Article 121576\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0020025524014907\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020025524014907","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Self-supervised learning (SSL) has been widely researched in recent years. In particular, generative self-supervised learning methods have achieved remarkable success in many AI domains, such as MAE in computer vision, the well-known BERT and GPT in natural language processing, and GraphMAE in graph learning. In time series analysis, however, not only is work along this line limited, but its performance has also fallen short of the potential shown in other fields. To fill this gap, we propose a simple and elegant masked autoencoder for time series representation learning. First, unlike most existing work, which uses the Transformer as its backbone, we build our model on neural ordinary differential equations, which possess excellent mathematical properties. Compared with the positional encoding in the Transformer, modeling the evolution patterns continuously can better capture temporal dependencies. Second, a timestamp-wise masking strategy is designed to work with the autoencoder to avoid bias; it also reduces cross-imputation between variables, yielding more robust representations. Lastly, extensive experiments on two classical tasks demonstrate the superiority of our model over state-of-the-art ones.
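The abstract gives no implementation details, but the timestamp-wise masking it describes can be sketched roughly as follows: instead of masking individual entries, entire timestamps are hidden across all variables, so the encoder cannot simply impute one variable from the others observed at the same step. The snippet below is a minimal NumPy illustration under that reading; the function name `timestamp_wise_mask`, the 50% mask ratio, and the zero placeholder are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def timestamp_wise_mask(x, mask_ratio=0.5, seed=None):
    """Hide whole timestamps of a multivariate series x with shape (T, D).

    All D variables at a masked timestamp are removed together, which is
    what limits the "cross-imputation" between variables mentioned in the
    abstract. (Illustrative sketch; details are assumptions.)
    """
    rng = np.random.default_rng(seed)
    T = x.shape[0]
    n_masked = int(round(T * mask_ratio))
    masked_steps = rng.choice(T, size=n_masked, replace=False)

    visible = np.ones(T, dtype=bool)      # True = timestamp stays observed
    visible[masked_steps] = False

    x_masked = x.copy()
    x_masked[~visible] = 0.0              # placeholder for hidden timestamps
    return x_masked, visible

# Toy example: 8 timestamps, 3 variables; half of the timestamps are hidden.
x = np.arange(24, dtype=float).reshape(8, 3)
x_masked, visible = timestamp_wise_mask(x, mask_ratio=0.5, seed=0)
print(visible)   # which timestamps the encoder actually sees
print(x_masked)  # rows of zeros where timestamps were masked
```

In a masked-autoencoder setup of this kind, the reconstruction loss would typically be computed only on the hidden timestamps, so the `visible` mask would also be used to select loss terms.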
Source journal
Information Sciences (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 14.00
Self-citation rate: 17.30%
Articles per year: 1322
Review time: 10.4 months
Journal description: Informatics and Computer Science Intelligent Systems Applications is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions. Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.