{"title":"Mixup Training for Generative Models to Defend Membership Inference Attacks","authors":"Zhe Ji, Qiansiqi Hu, Liyao Xiang, Chenghu Zhou","doi":"10.1109/INFOCOM53939.2023.10229036","DOIUrl":null,"url":null,"abstract":"With the popularity of machine learning, it has been a growing concern on the trained model revealing the private information of the training data. Membership inference attack (MIA) poses one of the threats by inferring whether a given sample participates in the training of the target model. Although MIA has been widely studied for discriminative models, for generative models, neither it nor its defense is extensively investigated. In this work, we propose a mixup training method for generative adversarial networks (GANs) as a defense against MIAs. Specifically, the original training data is replaced with their interpolations so that GANs would never overfit the original data. The intriguing part is an analysis from the hypothesis test perspective to theoretically prove our method could mitigate the AUC of the strongest likelihood ratio attack. Experimental results support that mixup training successfully defends the state-of-the-art MIAs for generative models, yet without model performance degradation or any additional training efforts, showing great promise to be deployed in practice.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM53939.2023.10229036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
With the growing popularity of machine learning, there has been increasing concern that trained models may reveal private information about their training data. The membership inference attack (MIA) poses one such threat by inferring whether a given sample participated in training the target model. Although MIAs have been widely studied for discriminative models, for generative models neither the attack nor its defense has been extensively investigated. In this work, we propose a mixup training method for generative adversarial networks (GANs) as a defense against MIAs. Specifically, the original training samples are replaced with their interpolations, so that the GAN never overfits the original data. The intriguing part is an analysis from the hypothesis-testing perspective that theoretically proves our method reduces the AUC of the strongest likelihood-ratio attack. Experimental results confirm that mixup training successfully defends against state-of-the-art MIAs on generative models without degrading model performance or requiring additional training effort, showing great promise for practical deployment.
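For concreteness, below is a minimal NumPy sketch of the interpolation step the abstract describes: each real training sample is replaced by a convex combination with a randomly paired partner before it reaches the discriminator. The `mixup_batch` name, the Beta(alpha, alpha) sampling of the mixing coefficient, and the random-permutation pairing are illustrative assumptions borrowed from standard mixup; the paper's exact scheme is not specified in the abstract.

```python
import numpy as np

def mixup_batch(batch, alpha=1.0, rng=None):
    """Replace a batch of real samples with pairwise convex interpolations.

    Hypothetical sketch: the actual interpolation scheme and alpha
    setting used in the paper may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    # One mixing coefficient per sample, broadcastable over remaining dims.
    lam = rng.beta(alpha, alpha, size=(len(batch),) + (1,) * (batch.ndim - 1))
    perm = rng.permutation(len(batch))
    # Convex combination of each sample with a randomly chosen partner;
    # the GAN is trained on these mixed samples, never the originals.
    return lam * batch + (1.0 - lam) * batch[perm]

# Usage (assumed training loop): feed mixed, not raw, real samples
# to the discriminator so the model cannot overfit any original point.
# real = next(data_loader)                      # e.g. shape (B, C, H, W)
# d_loss_real = discriminator(mixup_batch(real))
```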