{"title":"A Personalized and Differentially Private Federated Learning for Anomaly Detection of Industrial Equipment","authors":"Zhen Zhang;Weishan Zhang;Zhicheng Bao;Yifan Miao;Yuru Liu;Yikang Zhao;Rui Zhang;Wenyin Zhu","doi":"10.1109/JRFID.2024.3390142","DOIUrl":null,"url":null,"abstract":"Federated learning is a distributed machine learning approach that achieves collaborative training while protecting data privacy. However, in distributed scenarios, the operational data of industrial equipment is dynamic and non-independently identically distributed (non-IID). This situation leads to poor performance of federated learning algorithms in industrial anomaly detection tasks. Personalized federated learning is a viable solution to the non-IID data problem, but it is not effective in responding to dynamic environmental changes. Implementing directed updates to the model, thereby effectively maintaining its stability, is one of the solutions for addressing dynamic challenges. In addition, even though federated learning has the ability to protect data privacy, it still has the risk of privacy leakage due to differential privacy attacks. In this paper, we propose a personalized federated learning based on hypernetwork and credible directed update of models to generate stable personalized models for clients with non-IID data in a dynamic environment. Furthermore, we propose a parameter-varying differential privacy mechanism to mitigate compromised differential attacks. We evaluate the capability of the proposed method to perform the anomaly detection task using real air conditioning datasets from three distinct factories. The results demonstrate that our framework outperforms existing personalized federated learning methods with an average accuracy improvement of 11.32%. Additionally, experimental results demonstrate that the framework can withstand differential attacks while maintaining high accuracy.","PeriodicalId":73291,"journal":{"name":"IEEE journal of radio frequency identification","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of radio frequency identification","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10504551/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Federated learning is a distributed machine learning approach that enables collaborative training while protecting data privacy. However, in distributed scenarios, the operational data of industrial equipment is dynamic and not independently and identically distributed (non-IID), which degrades the performance of federated learning algorithms on industrial anomaly detection tasks. Personalized federated learning is a viable solution to the non-IID data problem, but it does not respond effectively to dynamic environmental changes. Applying directed updates to the model, and thereby keeping it stable, is one way to address these dynamic challenges. In addition, although federated learning protects data privacy by design, it remains at risk of privacy leakage through differential attacks. In this paper, we propose a personalized federated learning framework based on a hypernetwork and credible directed model updates, which generates stable personalized models for clients with non-IID data in a dynamic environment. Furthermore, we propose a parameter-varying differential privacy mechanism to mitigate differential attacks. We evaluate the anomaly detection capability of the proposed method on real air-conditioning datasets from three distinct factories. The results show that our framework outperforms existing personalized federated learning methods, with an average accuracy improvement of 11.32%. The experiments also show that the framework can withstand differential attacks while maintaining high accuracy.
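To make the two ingredients named in the abstract more concrete, the following PyTorch sketch shows (a) a hypernetwork that maps a per-client embedding to the flattened weights of a small personalized model, and (b) a parameter-varying Gaussian noise step applied to a client update before it leaves the device. This is only a minimal illustration under our own assumptions: the layer sizes, the noise schedule, and all names (HyperNetwork, add_parameter_varying_gaussian_noise) are hypothetical and do not reproduce the paper's actual implementation.

```python
# Minimal sketch (not the authors' implementation) of hypernetwork-based
# personalization plus a parameter-varying Gaussian privacy step.
import torch
import torch.nn as nn


class HyperNetwork(nn.Module):
    """Maps a learned client embedding to the flattened parameters of a target model."""

    def __init__(self, embed_dim: int, target_param_count: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, target_param_count),
        )

    def forward(self, client_embedding: torch.Tensor) -> torch.Tensor:
        # Output is later reshaped into the layers of the client's personalized model.
        return self.net(client_embedding)


def add_parameter_varying_gaussian_noise(update: torch.Tensor,
                                         base_sigma: float,
                                         round_idx: int,
                                         decay: float = 0.95) -> torch.Tensor:
    """Illustrative parameter-varying mechanism (assumed, not from the paper):
    the Gaussian noise scale is modulated per parameter by its relative magnitude
    and decays across communication rounds."""
    sigma = base_sigma * (decay ** round_idx)
    per_param_scale = sigma * (update.abs() / (update.abs().mean() + 1e-8))
    return update + torch.randn_like(update) * per_param_scale


if __name__ == "__main__":
    # Usage sketch: generate personalized weights for one client, then privatize
    # a stand-in local update before it would be sent to the server.
    embed_dim, target_params = 16, 1024
    hyper = HyperNetwork(embed_dim, target_params)

    client_embedding = torch.randn(1, embed_dim)        # learned per-client embedding
    personalized_weights = hyper(client_embedding)      # flattened target-model weights

    raw_update = torch.randn(target_params)             # placeholder for a real local update
    private_update = add_parameter_varying_gaussian_noise(raw_update,
                                                          base_sigma=0.1,
                                                          round_idx=3)
    print(personalized_weights.shape, private_update.shape)
```

The design intuition being illustrated is that the hypernetwork lets the server produce a distinct parameter vector per client (addressing non-IID data), while scaling the injected noise per parameter and per round trades off privacy against the stability of the personalized model; the concrete schedule shown here is an assumption for illustration only.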