An Aggregation Method based on Cosine Distance Filtering

Degang Wang, Yi Sun, Qi Gao, Fan Yang
{"title":"An Aggregation Method based on Cosine Distance Filtering","authors":"Degang Wang, Yi Sun, Qi Gao, Fan Yang","doi":"10.1109/iip57348.2022.00031","DOIUrl":null,"url":null,"abstract":"Federated learning provides privacy protection for source data by exchanging model parameters or gradients. However, it still faces the problem of privacy disclosure. For example, membership inference attack aims to identify whether target data sample is used to train machine learning models in federated learning. Active membership inference attack takes advantage of the feature that attackers can participate in model training in federated learning, actively influence the model update to extract more information about the training set, which greatly increases the risk of model privacy disclosure. Aiming at the problem that the existing secure aggregation methods of federated learning cannot resist the active membership inference attack, DeMiaAgg, an aggregation method based on cosine distance filtering, is proposed. The cosine distance is used to quantify the deviation degree between clients’ gradient vector and global model parameter vector, and the malicious gradient vector is excluded from gradients aggregation to defense against the active membership inference attack. 
Experiments on the Texas 100 and Location30 datasets show that DeMiaAgg method is superior to the current advanced differential privacy and secure aggregation methods, and can reduce the accuracy of active membership inference attack to the level of passive attacks.","PeriodicalId":412907,"journal":{"name":"2022 4th International Conference on Intelligent Information Processing (IIP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Intelligent Information Processing (IIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iip57348.2022.00031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated learning provides privacy protection for source data by exchanging model parameters or gradients rather than the data itself. However, it still faces the risk of privacy disclosure. For example, a membership inference attack aims to identify whether a target data sample was used to train the machine learning model. An active membership inference attack exploits the fact that an attacker can participate in model training in federated learning, actively influencing model updates to extract more information about the training set, which greatly increases the risk of privacy disclosure. To address the problem that existing secure aggregation methods for federated learning cannot resist active membership inference attacks, we propose DeMiaAgg, an aggregation method based on cosine distance filtering. The cosine distance is used to quantify the degree of deviation between each client's gradient vector and the global model parameter vector, and malicious gradient vectors are excluded from gradient aggregation to defend against the active membership inference attack. Experiments on the Texas 100 and Location30 datasets show that DeMiaAgg outperforms current state-of-the-art differential privacy and secure aggregation methods and can reduce the accuracy of active membership inference attacks to the level of passive attacks.
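The filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value, the fallback behavior when all updates are filtered, and the use of a plain mean for aggregation are all assumptions made here for concreteness.

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity between two vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def filtered_aggregate(global_params, client_updates, threshold=0.5):
    """Aggregate client updates after cosine-distance filtering.

    Updates whose cosine distance to the global parameter vector
    exceeds `threshold` are treated as potentially malicious and
    excluded from aggregation. (Threshold and fallback are
    illustrative choices, not from the paper.)
    """
    kept = [u for u in client_updates
            if cosine_distance(u, global_params) <= threshold]
    if not kept:
        # Fallback assumption: if everything is filtered, keep all updates.
        kept = client_updates
    return np.mean(kept, axis=0)
```

For example, a client update pointing opposite to the global parameter vector has cosine distance 2 and is excluded, while benign updates close to the global direction pass the filter and are averaged.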