Beyond Model-Level Membership Privacy Leakage: an Adversarial Approach in Federated Learning

Authors: Jiale Chen, Jiale Zhang, Yanchao Zhao, Hao Han, Kun Zhu, Bing Chen
Venue: 2020 29th International Conference on Computer Communications and Networks (ICCCN)
Publication date: 2020-08-01
DOI: 10.1109/ICCCN49398.2020.9209744 (https://doi.org/10.1109/ICCCN49398.2020.9209744)
Citations: 24
Abstract
With the rise of privacy concerns in traditional centralized machine learning services, federated learning, which incorporates multiple participants to train a global model across their localized training data, has lately received significant attention in both industry and academia. However, recent research reveals inherent vulnerabilities of federated learning to membership inference attacks, in which an adversary can infer whether a given data record belongs to the model's training set. Although state-of-the-art techniques can successfully deduce membership information from centralized machine learning models, it remains challenging to infer membership at a more confined level: the user level. In this paper, we propose a novel user-level inference attack mechanism in federated learning. Specifically, we first give a comprehensive analysis of active and targeted membership inference attacks in the context of federated learning. Then, considering a more complicated scenario in which the adversary can only passively observe the updated models from different iterations, we incorporate generative adversarial networks into our method to enrich the training set for the final membership inference model. Extensive experimental results demonstrate the effectiveness of our proposed attack approach in both the single-label and multi-label cases.
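The core primitive the abstract describes, deciding whether a given record was in a model's training set, can be illustrated with a toy confidence-threshold attack. This is a common baseline for membership inference, not the paper's GAN-based user-level method; the beta-distributed confidence scores and the 0.7 threshold below are illustrative assumptions standing in for a real target model, which tends to be more confident on records it was trained on.

```python
import random

random.seed(0)

# Synthetic stand-in for a target model's softmax confidence on its top class:
# members (training records) skew toward high confidence due to overfitting,
# while unseen records cluster nearer the middle. These distributions are
# assumptions for illustration, not measurements from the paper.
member_conf = [random.betavariate(8, 2) for _ in range(1000)]     # training members
nonmember_conf = [random.betavariate(4, 4) for _ in range(1000)]  # unseen records

def infer_membership(confidence, threshold=0.7):
    """Guess 'member' when the target model's confidence exceeds the threshold."""
    return confidence > threshold

# Attack quality: true-positive rate on members vs. false-positive rate on non-members.
tpr = sum(infer_membership(c) for c in member_conf) / len(member_conf)
fpr = sum(infer_membership(c) for c in nonmember_conf) / len(nonmember_conf)
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A gap between TPR and FPR is exactly the leakage signal such attacks exploit; the paper's contribution is lifting this record-level decision to the user level in a federated setting, using a GAN to enrich the attack model's training data when only passively observed model updates are available.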