{"title":"Adversarial Attacks on Multi-Level Fault Detection and Diagnosis Systems","authors":"Akram S. Awad, Ismail R. Alkhouri, George K. Atia","doi":"10.1109/mlsp52302.2021.9596378","DOIUrl":null,"url":null,"abstract":"Building automation systems are susceptible to malicious attacks, causing erroneous Fault Detection and Diagnosis (FDD). In this paper, we aim at examining the robustness of a Hierarchical Fault Detection and Diagnosis (HFDD) model, which uses multiple levels for detection and diagnosis, to adversarial perturbation attacks. We formulate convex programs to generate small perturbations targeting different levels of the HFDD model. We show that the HFDD model is harder to fool than the single level classifier and that attacking a certain level can be achieved with negligible effect on the higher level accuracy. We perform a case study of said attacks on the HFDD model using experimental data from faulty Air Handling Units. Performance is evaluated based on the reduction in classification accuracy, robustness of the higher level accuracy, and imperceptibility of the attack.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/mlsp52302.2021.9596378","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Building automation systems are susceptible to malicious attacks that cause erroneous Fault Detection and Diagnosis (FDD). In this paper, we examine the robustness of a Hierarchical Fault Detection and Diagnosis (HFDD) model, which uses multiple levels for detection and diagnosis, against adversarial perturbation attacks. We formulate convex programs that generate small perturbations targeting different levels of the HFDD model. We show that the HFDD model is harder to fool than a single-level classifier, and that a given level can be attacked with negligible effect on higher-level accuracy. We conduct a case study of these attacks on the HFDD model using experimental data from faulty Air Handling Units. Performance is evaluated in terms of the reduction in classification accuracy, the robustness of higher-level accuracy, and the imperceptibility of the attack.
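To give a flavor of the convex-program view of perturbation attacks mentioned in the abstract, the sketch below is an illustrative toy example (not the paper's formulation, which targets multiple levels of the HFDD model): for a linear binary classifier f(x) = w·x + b, the smallest L2 perturbation that moves an input onto the decision boundary solves the convex program min ‖δ‖² subject to w·(x+δ) + b = 0, which admits the closed-form solution δ = −(f(x)/‖w‖²)·w.

```python
import numpy as np

def minimal_perturbation(w, b, x):
    """Closed-form solution of the convex program
       min ||d||^2  s.t.  w.(x + d) + b = 0,
    i.e. the smallest L2 step moving x onto the hyperplane w.x + b = 0."""
    fx = w @ x + b               # signed distance scaled by ||w||
    return -(fx / (w @ w)) * w   # project x onto the decision boundary

# Hypothetical classifier and input for illustration.
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 1.0])         # f(x) = 2 - 1 + 0.5 = 1.5 (class +1)

d = minimal_perturbation(w, b, x)
x_adv = x + d                    # perturbed input lies on the boundary
print(np.round(w @ x_adv + b, 8))  # 0.0
print(np.round(np.linalg.norm(d), 4))  # small perturbation norm: 0.6708
```

Any infinitesimally larger step in the same direction flips the predicted class; the paper's attacks similarly minimize the perturbation norm (for imperceptibility) while enforcing a misclassification constraint at a chosen level of the hierarchy.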