A new approach for cross-silo federated learning and its privacy risks

Michele Fontana, Francesca Naretto, A. Monreale
{"title":"A new approach for cross-silo federated learning and its privacy risks","authors":"Michele Fontana, Francesca Naretto, A. Monreale","doi":"10.1109/PST52912.2021.9647753","DOIUrl":null,"url":null,"abstract":"Federated Learning has witnessed an increasing popularity in the past few years for its ability to train Machine Learning models in critical contexts, using private data without moving them. Most of the approaches in the literature are focused on mobile environments, where mobile devices contain the data of single users, and typically deal with images or text data. In this paper, we define HOLDA, a novel federated learning approach tailored for training machine learning models on data distributed over federated organizations hierarchically organized. Our method focuses on the generalization capabilities of the neural network models, providing a new mechanism for selecting their best weights. In addition, it is tailored for tabular data. We empirically test the performance of our approach on two different tabular datasets, showing excellent results in terms of performance and generalization capabilities. Then, we also tackle the problem of assessing the privacy risk of users represented in the training data. In particular, we empirically show, by attacking the HOLDA models with the Membership Inference Attack, that the privacy of the users in the training data may have high risk.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 18th International Conference on Privacy, Security and Trust (PST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PST52912.2021.9647753","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Federated Learning has witnessed increasing popularity in the past few years for its ability to train Machine Learning models in critical contexts using private data, without moving them. Most of the approaches in the literature focus on mobile environments, where mobile devices contain the data of single users, and typically deal with image or text data. In this paper, we define HOLDA, a novel federated learning approach tailored to training machine learning models on data distributed across hierarchically organized federations of organizations. Our method focuses on the generalization capabilities of the neural network models, providing a new mechanism for selecting their best weights. In addition, it is tailored to tabular data. We empirically test our approach on two different tabular datasets, showing excellent results in terms of predictive performance and generalization capabilities. We then also tackle the problem of assessing the privacy risk of the users represented in the training data. In particular, by attacking the HOLDA models with the Membership Inference Attack, we empirically show that the privacy of the users represented in the training data may be at high risk.
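The abstract does not detail HOLDA's training procedure, but the two ingredients it names, aggregation over a hierarchy of organizations and selection of the best-generalizing weights, can be illustrated with a minimal sketch. The sketch below assumes a plain FedAvg-style weighted average at each level of the hierarchy and a held-out validation set used to retain the best global weights; all names (`local_train`, `fed_avg`, `make_silo`) are illustrative assumptions, and the actual HOLDA mechanism may differ.

```python
# Hedged sketch: hierarchical federated averaging with best-weight selection on a
# held-out validation set. Not the paper's actual algorithm; a generic illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 10  # number of tabular features

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent as a stand-in for local training."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on the cross-entropy loss
    return w

def fed_avg(params, sizes):
    """Size-weighted average of model parameters (plain FedAvg aggregation)."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(params, axis=0, weights=sizes / sizes.sum())

def accuracy(w, X, y):
    return np.mean((X @ w > 0).astype(int) == y)

def make_silo(n):
    """Synthetic tabular data for one silo (hypothetical data, for illustration only)."""
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

# Two "organizations", each holding two "silos" of tabular data, plus a validation set.
hierarchy = [[make_silo(200), make_silo(150)],
             [make_silo(300), make_silo(100)]]
X_val, y_val = make_silo(500)

w_global, best_w, best_acc = np.zeros(d), np.zeros(d), 0.0
for rnd in range(20):
    org_params, org_sizes = [], []
    for silos in hierarchy:                    # intermediate aggregation inside each organization
        silo_w = [local_train(w_global, X, y) for X, y in silos]
        silo_n = [len(y) for _, y in silos]
        org_params.append(fed_avg(silo_w, silo_n))
        org_sizes.append(sum(silo_n))
    w_global = fed_avg(org_params, org_sizes)  # root aggregation across organizations
    acc = accuracy(w_global, X_val, y_val)
    if acc > best_acc:                         # keep the weights that generalize best so far
        best_acc, best_w = acc, w_global.copy()

print(f"selected model accuracy on held-out validation data: {best_acc:.3f}")
```

The privacy evaluation mentioned in the abstract relies on a Membership Inference Attack. A minimal loss-threshold attack in the spirit of Yeom et al. conveys the idea: records the model was trained on tend to have lower loss, so thresholding the per-record loss lets an adversary guess membership. This, too, is an assumption-laden sketch, not the attack configuration used in the paper.

```python
# Hedged sketch: loss-threshold membership inference attack on a small logistic model.
import numpy as np

rng = np.random.default_rng(1)
d = 10

def make_data(n):
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

def train(X, y, lr=0.1, epochs=200):
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def per_example_loss(w, X, y):
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-7, 1 - 1e-7)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

X_train, y_train = make_data(100)   # members of the training set
X_out, y_out = make_data(100)       # non-members
w = train(X_train, y_train)

# Attack rule: predict "member" when the record's loss is below the mean training loss.
threshold = per_example_loss(w, X_train, y_train).mean()
guess_in = per_example_loss(w, X_train, y_train) < threshold
guess_out = per_example_loss(w, X_out, y_out) < threshold
attack_acc = 0.5 * (guess_in.mean() + (1 - guess_out.mean()))
print(f"membership inference accuracy: {attack_acc:.3f}")  # ~0.5 would indicate no leakage
```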