{"title":"LGAT: A novel model for multivariate time series anomaly detection with improved anomaly transformer and learning graph structures","authors":"Mi Wen , ZheHui Chen , Yun Xiong , YiChuan Zhang","doi":"10.1016/j.neucom.2024.129024","DOIUrl":null,"url":null,"abstract":"<div><div>Time series anomaly detection involves identifying data points in continuously collected datasets that deviate from normal patterns. Given that real-world systems often consist of multiple variables, detecting anomalies in multivariate datasets has become a key focus of current research. This challenge has wide-ranging applications across various industries for system maintenance, such as in water treatment and distribution networks, transportation, and autonomous vehicles, thus driving active research in the field of time series anomaly detection. However, traditional methods primarily address this issue by predicting and reconstructing input time steps, but they still suffer from problems of overgeneralization and inconsistency in providing high performance for reasoning about complex dynamics. In response, we propose a novel unsupervised model called LGAT, which can automatically learn graph structures and leverage an enhanced Anomaly Transformer architecture to capture temporal dependencies. Moreover, the model features a new encoder–decoder architecture designed to enhance context extraction capabilities. In particular, the model calculates anomaly scores for multivariate time series anomaly detection by combining the reconstruction of input time series with the model’s computed prior associations and sequential correlations. This model captures inter-variable relationships and exhibit stronger context extraction abilities, making it more sensitive to anomaly detection. Extensive experiments on six common anomaly detection benchmarks further demonstrate the superiority of our approach over other state-of-the-art methods, with an improvement of approximately 1.2% across various metrics.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"617 ","pages":"Article 129024"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224017958","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Time series anomaly detection involves identifying data points in continuously collected datasets that deviate from normal patterns. Given that real-world systems often consist of multiple variables, detecting anomalies in multivariate datasets has become a key focus of current research. This challenge has wide-ranging applications for system maintenance across various industries, such as water treatment and distribution networks, transportation, and autonomous vehicles, driving active research in the field of time series anomaly detection. Traditional methods primarily address this issue by predicting or reconstructing input time steps, but they still suffer from overgeneralization and fail to deliver consistently high performance when reasoning about complex dynamics. In response, we propose a novel unsupervised model called LGAT, which automatically learns graph structures and leverages an enhanced Anomaly Transformer architecture to capture temporal dependencies. Moreover, the model features a new encoder–decoder architecture designed to enhance context extraction capabilities. In particular, the model calculates anomaly scores for multivariate time series anomaly detection by combining the reconstruction of the input time series with the model's computed prior associations and sequential correlations. The model captures inter-variable relationships and exhibits stronger context extraction abilities, making it more sensitive to anomalies. Extensive experiments on six common anomaly detection benchmarks further demonstrate the superiority of our approach over other state-of-the-art methods, with an improvement of approximately 1.2% across various metrics.
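As an illustration of the scoring idea described above, the following is a minimal sketch of how a per-timestep anomaly score can combine reconstruction error with a prior/series association discrepancy, following the general Anomaly Transformer convention. All function and variable names, the symmetric-KL discrepancy, and the softmax-style weighting are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: combine per-timestep reconstruction error with an
# association discrepancy, in the spirit of Anomaly Transformer-style scoring.
# Names and weighting are assumptions for illustration, not LGAT's exact method.
import numpy as np

def anomaly_scores(x, x_rec, prior_assoc, series_assoc, eps=1e-8):
    """Return one anomaly score per time step.

    x, x_rec     : (T, d) input window and its reconstruction
    prior_assoc  : (T, T) prior association matrix, rows sum to 1
    series_assoc : (T, T) series (attention-based) association matrix, rows sum to 1
    """
    # Per-timestep reconstruction error (mean squared error over variables).
    rec_err = np.mean((x - x_rec) ** 2, axis=1)                              # (T,)

    # Symmetric KL divergence between prior and series associations per time step.
    kl_ps = np.sum(prior_assoc * np.log((prior_assoc + eps) / (series_assoc + eps)), axis=1)
    kl_sp = np.sum(series_assoc * np.log((series_assoc + eps) / (prior_assoc + eps)), axis=1)
    assoc_disc = kl_ps + kl_sp                                               # (T,)

    # Weight reconstruction error by a normalized exponential of the negative
    # discrepancy, so time steps with unusual association patterns stand out.
    weights = np.exp(-assoc_disc)
    weights = weights / (weights.sum() + eps)
    return weights * rec_err                                                 # (T,)

# Toy usage with random data of matching shapes.
T, d = 100, 5
x = np.random.randn(T, d)
x_rec = x + 0.1 * np.random.randn(T, d)
A = np.random.rand(T, T); prior = A / A.sum(axis=1, keepdims=True)
B = np.random.rand(T, T); series = B / B.sum(axis=1, keepdims=True)
scores = anomaly_scores(x, x_rec, prior, series)
```

In practice, the resulting scores would be thresholded (e.g., by a fixed quantile of scores on validation data) to flag anomalous time steps.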
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.