Secure Distributed Processing of NG with Updatable Decomposition Data and Parameters

H. Miyajima, Noritaka Shigei, H. Miyajima, N. Shiratori
{"title":"Secure Distributed Processing of NG with Updatable Decomposition Data and Parameters","authors":"H. Miyajima, Noritaka Shigei, H. Miyajima, N. Shiratori","doi":"10.1109/NaNA56854.2022.00067","DOIUrl":null,"url":null,"abstract":"Machine learning using distributed data, such as federative learning (FL) and secure multiparty computation (SMC), is demanded to achieve both utility and confidentiality when using confidential data. There is a trade-off between utility and confidentiality, and in general, SMC can offer better confidentiality than FL and better utility than homomorphic encryption. In machine learning with SMC, confidentiality is improved by decomposing individual data and parameters into multiple pieces, storing each piece on each server, and learning without restoring the data or parameters themselves. However, once the conventional methods randomly decompose data and parameters, the decomposition remains permanently fixed. The fixed decomposition is considered undesirable because it gives malicious attackers more opportunities to attack the data and model. In this paper, we propose a secure distributed processing of neural gas (NG), which is one of unsupervised machine learning. In addition to the decomposition of data, the proposed method can update the decomposition of parameters during learning. Each server can independently update the decomposition of both data and parameters in the proposed method, and the data and parameters are never restored during learning. Our simulation result shows that it can achieve the same level of learning accuracy as the conventional methods.","PeriodicalId":113743,"journal":{"name":"2022 International Conference on Networking and Network Applications (NaNA)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Networking and Network Applications (NaNA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NaNA56854.2022.00067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Machine learning over distributed data, such as federated learning (FL) and secure multiparty computation (SMC), is in demand to achieve both utility and confidentiality when handling confidential data. There is a trade-off between utility and confidentiality; in general, SMC offers better confidentiality than FL and better utility than homomorphic encryption. In machine learning with SMC, confidentiality is improved by decomposing each data item and parameter into multiple pieces, storing one piece on each server, and learning without ever restoring the data or parameters themselves. However, in conventional methods, once data and parameters are randomly decomposed, the decomposition remains permanently fixed. A fixed decomposition is undesirable because it gives malicious attackers more opportunities to attack the data and the model. In this paper, we propose a secure distributed processing method for neural gas (NG), an unsupervised machine learning algorithm. In addition to decomposing the data, the proposed method can update the decomposition of the parameters during learning. Each server can independently update the decomposition of both data and parameters, and the data and parameters are never restored during learning. Our simulation results show that the proposed method achieves the same level of learning accuracy as conventional methods.
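The building block the abstract describes is additive secret sharing: a value is split into random pieces that sum to the original, and "updating the decomposition" amounts to re-randomizing those pieces without ever reconstructing the value. Below is a minimal Python sketch of that idea under stated assumptions; the modulus, the function names (share, rerandomize, reconstruct), and the centralized loop are illustrative, not the authors' protocol, in which each server would apply its offsets locally.

import secrets

# Modulus for the pieces; the choice of prime is an assumption made for
# this illustration, not taken from the paper.
P = 2**61 - 1

def share(secret: int, n_servers: int) -> list[int]:
    """Decompose a value into n additive pieces with sum(pieces) % P == secret."""
    pieces = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    pieces.append((secret - sum(pieces)) % P)
    return pieces

def rerandomize(pieces: list[int]) -> list[int]:
    """Update the decomposition without reconstructing the secret.
    Piece i gains a fresh random offset and piece i+1 loses the same
    offset, so the sum (the secret) is unchanged while every piece moves.
    Shown centrally for brevity; in the protocol each server acts locally."""
    n = len(pieces)
    updated = pieces[:]
    for i in range(n):
        r = secrets.randbelow(P)
        updated[i] = (updated[i] + r) % P
        updated[(i + 1) % n] = (updated[(i + 1) % n] - r) % P
    return updated

def reconstruct(pieces: list[int]) -> int:
    """Used here only to check correctness; never executed during learning."""
    return sum(pieces) % P

x = 123456                    # e.g. one component of an NG reference vector
pieces = share(x, 3)          # one piece stored on each of three servers
pieces = rerandomize(pieces)  # updatable decomposition between learning steps
assert reconstruct(pieces) == x

In the paper's setting, each input vector and each NG reference vector would exist only as such pieces, one per server, with the rank-based NG update carried out on the pieces; periodic re-randomization limits what an attacker who observes a single server over time can accumulate.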