Model Segmentation for Storage Efficient Private Federated Learning with Top $r$ Sparsification

Sajani Vithana, S. Ulukus
{"title":"Model Segmentation for Storage Efficient Private Federated Learning with Top $r$ Sparsification","authors":"Sajani Vithana, S. Ulukus","doi":"10.1109/CISS56502.2023.10089698","DOIUrl":null,"url":null,"abstract":"In federated learning (FL) with top $r$ sparsification, millions of users collectively train a machine learning (ML) model locally, using their personal data by only communicating the most significant $r$ fraction of updates to reduce the communication cost. It has been shown that the values as well as the indices of these selected (sparse) updates leak information about the users' personal data. In this work, we investigate different methods to carry out user-database communications in FL with top $r$ sparsification efficiently, while guaranteeing information theoretic privacy of users' personal data. These methods incur considerable storage cost. As a solution, we present two schemes with different properties that use MDS coded storage along with a model segmentation mechanism to reduce the storage cost at the expense of a controllable amount of information leakage, to perform private FL with top $r$ sparsification.","PeriodicalId":243775,"journal":{"name":"2023 57th Annual Conference on Information Sciences and Systems (CISS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 57th Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS56502.2023.10089698","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In federated learning (FL) with top $r$ sparsification, millions of users collectively train a machine learning (ML) model locally on their personal data, communicating only the most significant $r$ fraction of their updates to reduce the communication cost. It has been shown that both the values and the indices of these selected (sparse) updates leak information about the users' personal data. In this work, we investigate different methods to efficiently carry out user-database communications in FL with top $r$ sparsification while guaranteeing information-theoretic privacy of the users' personal data. These methods incur a considerable storage cost. As a solution, we present two schemes with different properties that use MDS coded storage along with a model segmentation mechanism to reduce the storage cost, at the expense of a controllable amount of information leakage, to perform private FL with top $r$ sparsification.
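To make the sparsification step concrete, below is a minimal sketch of plain (non-private) top $r$ sparsification, assuming NumPy; the function name and example values are illustrative, and this is not the paper's private protocol. Each user keeps only the $r$ fraction of update coordinates with the largest magnitudes and uploads the resulting (indices, values) pair, which is exactly the pair whose leakage the paper addresses.

```python
import numpy as np

def top_r_sparsify(update: np.ndarray, r: float):
    """Keep only the r fraction of coordinates with the largest magnitude.

    Returns the selected indices and their values; both are what a user
    would upload, and both leak information about the local data.
    Illustrative sketch only, not the paper's private scheme.
    """
    k = max(1, int(r * update.size))                # number of coordinates kept
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of the top-k by |value|
    return idx, update[idx]

# Example: a user's local update of dimension 20 with r = 0.1,
# so only the top 2 coordinates are communicated.
rng = np.random.default_rng(0)
local_update = rng.normal(size=20)
indices, values = top_r_sparsify(local_update, r=0.1)
print(indices, values)
```

The paper's schemes add MDS coded storage and model segmentation on top of this step so that the databases learn a controlled (ideally zero) amount of information from the uploaded indices and values; those mechanisms are not reproduced in the sketch above.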