DAML: Practical Secure Protocol for Data Aggregation Based on Machine Learning

Ping Zhao, Jiaxin Sun, Guanglin Zhang
{"title":"DAML: Practical Secure Protocol for Data Aggregation Based on Machine Learning","authors":"Ping Zhao, Jiaxin Sun, Guanglin Zhang","doi":"10.1145/3404192","DOIUrl":null,"url":null,"abstract":"Data aggregation based on machine learning (ML), in mobile edge computing, allows participants to send ephemeral parameter updates of local ML on their private data instead of the exact data to the untrusted aggregator. However, it still enables the untrusted aggregator to reconstruct participants’ private data, although parameter updates contain significantly less information than the private data. Existing work either incurs extremely high overhead or ignores malicious participants dropping out. The latest research deals with the dropouts with desirable cost, but it is vulnerable to malformed message attacks. To this end, we focus on the data aggregation based on ML in a practical setting where malicious participants may send malformed parameter updates to perturb the total parameter updates learned by the aggregator. Moreover, malicious participants may drop out and collude with other participants or the untrusted aggregator. In such a scenario, we propose a scheme named DAML, which to the best of our knowledge is the first attempt toward verifying participants’ submissions in data aggregation based on ML. The main idea is to validate participants’ submissions via SSVP, a novel secret-shared verification protocol, and then aggregate participants’ parameter updates using SDA, a secure data aggregation protocol. Simulation results demonstrate that DAML can protect participants’ data privacy with preferable overhead.","PeriodicalId":263540,"journal":{"name":"ACM Trans. Sens. Networks","volume":"54 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Trans. Sens. Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3404192","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In mobile edge computing, data aggregation based on machine learning (ML) allows participants to send ephemeral parameter updates of local ML models trained on their private data, rather than the raw data, to an untrusted aggregator. However, even though these parameter updates contain significantly less information than the private data, they still allow the untrusted aggregator to reconstruct participants’ private data. Existing work either incurs extremely high overhead or ignores malicious participants that drop out. The most recent research handles dropouts at a desirable cost, but it remains vulnerable to malformed-message attacks. To this end, we focus on ML-based data aggregation in a practical setting where malicious participants may send malformed parameter updates to perturb the total parameter update learned by the aggregator; moreover, they may drop out and collude with other participants or with the untrusted aggregator. In this scenario, we propose a scheme named DAML, which to the best of our knowledge is the first attempt to verify participants’ submissions in ML-based data aggregation. The main idea is to validate participants’ submissions via SSVP, a novel secret-shared verification protocol, and then aggregate participants’ parameter updates using SDA, a secure data aggregation protocol. Simulation results demonstrate that DAML protects participants’ data privacy with preferable overhead.
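The abstract does not spell out how SSVP or SDA work internally. As a rough illustration of the general secret-sharing idea behind secure aggregation of parameter updates, the following is a minimal sketch using additive secret sharing over a prime field. The field modulus, function names, and the multi-server layout are illustrative assumptions for this sketch only; they are not the DAML, SSVP, or SDA protocols from the paper.

```python
# Minimal sketch of additive-secret-sharing-based aggregation of (quantized)
# parameter updates. This is an illustrative assumption, not the paper's protocol.
import random

PRIME = 2**61 - 1  # assumed field modulus for arithmetic secret sharing


def share(value: int, n_shares: int) -> list[int]:
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME


# Each participant secret-shares its quantized parameter update among several
# parties, so no single party sees a raw update.
updates = [5, 7, 11]           # toy parameter updates from 3 participants
n_servers = 3
server_shares = [[] for _ in range(n_servers)]  # shares received by each server
for u in updates:
    for s, sh in enumerate(share(u, n_servers)):
        server_shares[s].append(sh)

# Each server locally sums the shares it holds; combining the partial sums
# reveals only the aggregate update, not any individual contribution.
partial_sums = [sum(col) % PRIME for col in server_shares]
total = reconstruct(partial_sums)
assert total == sum(updates)
print("aggregated update:", total)
```

Note that in this style of aggregation a malformed update stays hidden inside its shares and would silently corrupt the aggregate, which is presumably why DAML validates submissions with SSVP before aggregating them with SDA.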