{"title":"一种基于余弦距离滤波的聚合方法","authors":"Degang Wang, Yi Sun, Qi Gao, Fan Yang","doi":"10.1109/iip57348.2022.00031","DOIUrl":null,"url":null,"abstract":"Federated learning provides privacy protection for source data by exchanging model parameters or gradients. However, it still faces the problem of privacy disclosure. For example, membership inference attack aims to identify whether target data sample is used to train machine learning models in federated learning. Active membership inference attack takes advantage of the feature that attackers can participate in model training in federated learning, actively influence the model update to extract more information about the training set, which greatly increases the risk of model privacy disclosure. Aiming at the problem that the existing secure aggregation methods of federated learning cannot resist the active membership inference attack, DeMiaAgg, an aggregation method based on cosine distance filtering, is proposed. The cosine distance is used to quantify the deviation degree between clients’ gradient vector and global model parameter vector, and the malicious gradient vector is excluded from gradients aggregation to defense against the active membership inference attack. 
Experiments on the Texas 100 and Location30 datasets show that DeMiaAgg method is superior to the current advanced differential privacy and secure aggregation methods, and can reduce the accuracy of active membership inference attack to the level of passive attacks.","PeriodicalId":412907,"journal":{"name":"2022 4th International Conference on Intelligent Information Processing (IIP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Aggregation Method based on Cosine Distance Filtering\",\"authors\":\"Degang Wang, Yi Sun, Qi Gao, Fan Yang\",\"doi\":\"10.1109/iip57348.2022.00031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning provides privacy protection for source data by exchanging model parameters or gradients. However, it still faces the problem of privacy disclosure. For example, membership inference attack aims to identify whether target data sample is used to train machine learning models in federated learning. Active membership inference attack takes advantage of the feature that attackers can participate in model training in federated learning, actively influence the model update to extract more information about the training set, which greatly increases the risk of model privacy disclosure. Aiming at the problem that the existing secure aggregation methods of federated learning cannot resist the active membership inference attack, DeMiaAgg, an aggregation method based on cosine distance filtering, is proposed. The cosine distance is used to quantify the deviation degree between clients’ gradient vector and global model parameter vector, and the malicious gradient vector is excluded from gradients aggregation to defense against the active membership inference attack. 
Experiments on the Texas 100 and Location30 datasets show that DeMiaAgg method is superior to the current advanced differential privacy and secure aggregation methods, and can reduce the accuracy of active membership inference attack to the level of passive attacks.\",\"PeriodicalId\":412907,\"journal\":{\"name\":\"2022 4th International Conference on Intelligent Information Processing (IIP)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 4th International Conference on Intelligent Information Processing (IIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/iip57348.2022.00031\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Intelligent Information Processing (IIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iip57348.2022.00031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Aggregation Method based on Cosine Distance Filtering
Federated learning protects source data by exchanging only model parameters or gradients rather than raw data. However, it still faces the risk of privacy disclosure. For example, a membership inference attack aims to determine whether a target data sample was used to train a machine learning model in federated learning. An active membership inference attack exploits the fact that attackers can participate in model training in federated learning: by actively influencing model updates, the attacker extracts more information about the training set, which greatly increases the risk of privacy disclosure. To address the problem that existing secure aggregation methods for federated learning cannot resist active membership inference attacks, DeMiaAgg, an aggregation method based on cosine distance filtering, is proposed. Cosine distance is used to quantify the deviation between each client's gradient vector and the global model parameter vector, and malicious gradient vectors are excluded from gradient aggregation to defend against the active membership inference attack. Experiments on the Texas100 and Location30 datasets show that DeMiaAgg is superior to current state-of-the-art differential privacy and secure aggregation methods, and can reduce the accuracy of active membership inference attacks to the level of passive attacks.
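The filtering idea described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: the fixed `threshold` value, the use of a simple mean for aggregation, and the choice to compare each client gradient directly against the flattened global parameter vector are all assumptions made here for clarity.

```python
import numpy as np

def cosine_distance(u, v):
    # Cosine distance = 1 - cosine similarity of the two vectors.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def filtered_aggregate(client_grads, global_params, threshold=0.5):
    """Aggregate only client gradient vectors whose cosine distance
    to the global model parameter vector is below `threshold`.

    `threshold` and plain averaging are illustrative assumptions;
    the paper's concrete filtering rule may differ.
    """
    kept = [g for g in client_grads
            if cosine_distance(g, global_params) < threshold]
    if not kept:
        raise ValueError("all client updates were filtered out")
    # Average the remaining (presumed benign) gradients.
    return np.mean(kept, axis=0)
```

In this sketch, an attacker's gradient that points away from the direction of the global parameter vector (e.g. a crafted update meant to amplify membership signal) has a large cosine distance and is dropped before averaging, while benign updates that roughly agree in direction are kept.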