Model-Agnostic Causal Principle for Unbiased KPI Anomaly Detection

Jiemin Ji, D. Guan, Yuwen Deng, Weiwei Yuan
DOI: 10.1109/IJCNN55064.2022.9892664
Published in: 2022 International Joint Conference on Neural Networks (IJCNN), 2022-07-18
Citations: 0

Abstract

KPI anomaly detection plays an important role in operation and maintenance. Because incomplete or missing labels are common, methods based on the VAE (Variational Auto-Encoder) are widely used. These methods assume that the normal patterns, which are in the majority, will be learned, but this assumption is hard to satisfy since abnormal patterns are inevitably embedded in the training data. Existing debiasing methods merely use anomaly labels to eliminate bias in the decoding process, but the latent representation generated by the encoder can still be biased, and even ill-defined when the input KPIs are too abnormal. We propose a model-agnostic causal principle that makes these VAE-based models unbiased. When the ELBO (evidence lower bound) is modified to use anomaly labels, our causal principle shows that the anomaly labels act as confounders between the training data and the learned representations, leading to the aforementioned bias. Our principle therefore applies a do-operation to cut off the causal path from anomaly labels to training data. Through the do-operation, we eliminate the anomaly bias in the encoder and reconstruct normal patterns more frequently in the decoder. Our causal improvements on existing VAE-based models, CausalDonut and CausalBagel, improve the F1-score by up to 5% over Donut and Bagel and surpass state-of-the-art supervised and unsupervised models. To empirically demonstrate the debiasing capability of our method, we also compare anomaly scores between the baselines and our models. In addition, the learning process of our principle is interpreted from an entropy perspective.
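To make the decoder-side debiasing concrete, here is a minimal sketch of the kind of label-aware modified ELBO (in the style of Donut's M-ELBO) that the abstract refers to as the existing approach: reconstruction terms for points labeled anomalous or missing are masked out so they do not pull the decoder toward abnormal patterns. All names (`m_elbo`, `normal_mask`, `beta`) are illustrative assumptions, not identifiers from the paper, and a diagonal-Gaussian encoder and decoder are assumed. Note that this masking only debiases decoding; the paper's causal do-operation additionally targets the bias left in the encoder, which this sketch does not implement.

```python
import math
import torch

def m_elbo(x, recon_mean, recon_std, z_mean, z_logvar, normal_mask, beta):
    """Donut-style modified ELBO (sketch): reconstruction terms of points
    labeled anomalous or missing are masked out of the objective."""
    # Per-dimension Gaussian log-likelihood log p(x | z).
    recon_ll = -0.5 * (((x - recon_mean) / recon_std) ** 2
                       + 2.0 * torch.log(recon_std)
                       + math.log(2.0 * math.pi))
    # Zero out contributions from non-normal (anomalous/missing) points.
    recon_term = (normal_mask * recon_ll).sum(dim=-1)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians,
    # scaled by beta (in Donut, the fraction of normal points in the window).
    kl = -0.5 * (1.0 + z_logvar - z_mean ** 2 - z_logvar.exp()).sum(dim=-1)
    return recon_term - beta * kl  # training maximizes this objective
```

In training one would minimize `-m_elbo(...).mean()`; lowering an entry of `normal_mask` removes that point's reconstruction term, which is exactly the decoder-side label usage the paper argues is insufficient on its own.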