T-VAE: Transformer-Based Variational AutoEncoder for Perceiving Anomalies in Multivariate Time Series Data
Authors: Chang Li, Yeo Chai Kiat, Jiwu Jing, Chun Long
Journal: Expert Systems, vol. 42, issue 7 (JCR Q2, Computer Science, Artificial Intelligence)
DOI: 10.1111/exsy.70078 (https://onlinelibrary.wiley.com/doi/10.1111/exsy.70078)
Published: 2025-05-23
Anomaly perception in multivariate time series data has crucial applications in domains such as industrial control and intrusion detection. In real-world scenarios, the sequence information in multivariate time series data, which encompasses the temporal order and the dependencies among high-dimensional samples and features, can be complex and nonlinear. Additionally, time series data often exhibit high volatility and are interspersed with noise. These factors make anomaly perception in multivariate time series challenging, and despite recent advances in deep learning, only a few methods are able to address all of these challenges. In this paper, we propose a Transformer-based Variational AutoEncoder (T-VAE) for anomaly perception in multivariate time series data. T-VAE consists of two sub-networks, the Representation Network and the Memory Network, which are jointly optimised end-to-end. The Representation Network leverages self-attention mechanisms and residual network structures to capture sequence information and latent patterns from multivariate time series data. The Memory Network employs a Variational AutoEncoder to learn the distribution of normal data, using Maximum Mean Discrepancy (MMD) to pull the distribution of high-volatility and noisy data towards that of the normal data. We evaluate T-VAE on five datasets, where it shows superior performance; comprehensive ablation studies and sensitivity analyses validate its effectiveness and robustness.
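The Maximum Mean Discrepancy term mentioned in the abstract, which measures how far the distribution of noisy data sits from that of normal data, can be sketched with the standard biased empirical estimator under a Gaussian kernel. This is a minimal illustration of MMD itself, not the paper's actual training objective; the kernel choice, bandwidth, and sample shapes below are assumptions for demonstration.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel values between rows of X and rows of Y."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd(X, Y, sigma=1.0):
    """Biased empirical estimate of squared MMD between sample sets X and Y."""
    k_xx = gaussian_kernel_matrix(X, X, sigma).mean()
    k_yy = gaussian_kernel_matrix(Y, Y, sigma).mean()
    k_xy = gaussian_kernel_matrix(X, Y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 8))   # stand-in for normal-data representations
shifted = rng.normal(3.0, 1.0, size=(200, 8))  # stand-in for noisy / high-volatility data

print(mmd(normal, normal))   # identical samples: estimate is zero
print(mmd(normal, shifted))  # mismatched distributions: clearly positive
```

Minimising such a term with respect to the parameters producing one of the two sample sets drives that set's distribution towards the other, which is how an MMD penalty can align noisy data with the learned normal-data distribution.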
About the journal:
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.