How Dense Autoencoders can still Achieve the State-of-the-art in Time-Series Anomaly Detection
Louis Jensen, Jayme Fosa, Ben Teitelbaum, Peter Chin
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 1272-1277, December 2021
DOI: 10.1109/ICMLA52953.2021.00207
Abstract
Time series data have become ubiquitous in the modern era of data collection. With the growth of these data streams, demand for automatic time series anomaly detection has grown as well: automatic monitoring lets engineers investigate only the unusual behavior in their streams. Despite this demand, many popular methods fail to offer a general-purpose solution. Some require expensive labelling of anomalies, others assume the data follow particular patterns, some suffer from long and unstable training, and many produce high rates of false alarms. In this paper we demonstrate that simpler is often better, showing that with only a few critical improvements a fully unsupervised multilayer perceptron autoencoder can outperform far more complicated models. Our improvements help distinguish anomalous subsequences that occur close to one another, and detect anomalies even amid shifting data distributions. We compare our model with state-of-the-art competitors on benchmark datasets sourced from NASA, Yahoo, and Numenta, outperforming competing models on all three.
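The core idea described above, scoring each subsequence by the reconstruction error of a fully unsupervised autoencoder, can be sketched as follows. This is a minimal illustration of the general approach, not the authors' architecture, training procedure, or thresholding scheme; the window length, hidden size, synthetic signal, and learning rate are all assumptions chosen for the demo.

```python
import numpy as np

# Hedged sketch: a tiny fully connected (linear) autoencoder detects an
# anomaly in a time series via per-window reconstruction error. Hidden
# size, window length, and the synthetic data are illustrative choices,
# not taken from the paper.
rng = np.random.default_rng(0)

# Synthetic signal: a sine wave with one injected spike anomaly.
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50)
signal[1500] += 5.0  # the anomaly

# Slice the series into overlapping windows (subsequences).
win = 25
windows = np.stack([signal[i:i + win] for i in range(len(signal) - win)])

# One-hidden-layer autoencoder trained by plain gradient descent on
# mean squared reconstruction error.
hidden = 8
W_enc = rng.normal(scale=0.1, size=(win, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, win))
lr = 1e-3
for _ in range(300):
    z = windows @ W_enc              # encode to the bottleneck
    err = z @ W_dec - windows        # decode and compare to the input
    grad_dec = z.T @ err / len(windows)
    grad_enc = windows.T @ (err @ W_dec.T) / len(windows)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Anomaly score = per-window reconstruction error; the spike is rare,
# so the autoencoder reconstructs it poorly and its windows score highest.
scores = np.mean(((windows @ W_enc) @ W_dec - windows) ** 2, axis=1)
flagged = int(np.argmax(scores))
print(flagged)  # an index of a window overlapping the injected spike
```

Because the normal sine pattern dominates training, the bottleneck learns to reconstruct it well, while the rare spike cannot be compressed through the hidden layer and yields a large error, which is the unsupervised detection signal the abstract relies on.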