Model-Agnostic Causal Principle for Unbiased KPI Anomaly Detection
Jiemin Ji, D. Guan, Yuwen Deng, Weiwei Yuan
2022 International Joint Conference on Neural Networks (IJCNN), published 2022-07-18. DOI: 10.1109/IJCNN55064.2022.9892664
KPI anomaly detection plays an important role in operation and maintenance. Because incomplete or missing labels are common, methods based on the VAE (Variational Auto-Encoder) are widely used. These methods assume that the normal patterns, which form the majority of the data, will be learned, but this assumption is hard to satisfy because abnormal patterns are inevitably embedded in the training data. Existing debiasing methods use anomaly labels only to eliminate bias in the decoding process, so the latent representation produced by the encoder can still be biased, and even ill-defined, when the input KPIs are highly abnormal. We propose a model-agnostic causal principle that makes such VAE-based models unbiased. When the ELBO (evidence lower bound) is modified to exploit anomaly labels, our causal principle shows that the labels act as confounders between the training data and the learned representations, which causes the aforementioned bias. The principle therefore applies a do-operation that cuts the causal path from anomaly labels to training data. Through the do-operation, we eliminate the anomaly bias in the encoder and reconstruct normal patterns more frequently in the decoder. Our causal improvements of existing VAE-based models, CausalDonut and CausalBagel, improve the F1-score by up to 5% over Donut and Bagel and surpass state-of-the-art supervised and unsupervised models. To demonstrate the debiasing capability of our method empirically, we also compare anomaly scores between the baselines and our models. In addition, we interpret the learning process of our principle from an entropy perspective.
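To make the label-aware ELBO modification concrete: in Donut-style models, the published M-ELBO zeroes out the reconstruction terms at points labeled anomalous or missing and scales the prior term by the fraction of normal points. The sketch below illustrates that general idea only; the function name, toy values, and exact form are illustrative assumptions, not this paper's causal (do-operation) variant.

```python
import numpy as np

def modified_elbo(recon_log_prob, log_pz, log_qz, normal_mask):
    """Illustrative M-ELBO-style objective (after Donut's M-ELBO, not
    this paper's causal variant): per-point reconstruction terms at
    anomalous/missing points are zeroed, and the prior term is scaled
    by beta, the fraction of normal points in the window."""
    alpha = normal_mask.astype(float)   # 1 = normal, 0 = anomalous/missing
    beta = alpha.mean()                 # down-weights the prior accordingly
    return (alpha * recon_log_prob).sum() + beta * log_pz - log_qz

# Toy 5-point window: points 2 and 3 carry anomaly labels, so their
# (very low) reconstruction log-likelihoods are excluded from training.
recon = np.array([-1.0, -1.2, -9.0, -8.5, -0.9])   # log p(x_t | z)
mask = np.array([1, 1, 0, 0, 1])
elbo = modified_elbo(recon, log_pz=-2.0, log_qz=-1.5, normal_mask=mask)
# (-1.0 - 1.2 - 0.9) + 0.6 * (-2.0) - (-1.5) = -2.8
```

With an all-normal mask the objective reduces to the standard ELBO, which is why excluding labeled anomalies can be seen as intervening on the training data rather than only on the decoder.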