Secure Distributed Processing of NG with Updatable Decomposition Data and Parameters

H. Miyajima, Noritaka Shigei, H. Miyajima, N. Shiratori

2022 International Conference on Networking and Network Applications (NaNA), 2022-12-01

DOI: 10.1109/NaNA56854.2022.00067 (https://doi.org/10.1109/NaNA56854.2022.00067)
Citations: 0
Abstract
Machine learning over distributed data, such as federated learning (FL) and secure multiparty computation (SMC), is in demand as a way to achieve both utility and confidentiality when handling confidential data. There is a trade-off between utility and confidentiality: in general, SMC offers better confidentiality than FL and better utility than homomorphic encryption. In machine learning with SMC, confidentiality is improved by decomposing individual data and parameters into multiple pieces, storing one piece on each server, and learning without ever restoring the data or parameters themselves. However, once conventional methods randomly decompose the data and parameters, the decomposition remains permanently fixed. A fixed decomposition is undesirable because it gives malicious attackers more opportunities to attack the data and the model. In this paper, we propose a secure distributed processing method for neural gas (NG), an unsupervised machine learning algorithm. In addition to decomposing the data, the proposed method can update the decomposition of the parameters during learning. In the proposed method, each server can independently update the decomposition of both data and parameters, and the data and parameters are never restored during learning. Our simulation results show that the proposed method achieves the same level of learning accuracy as conventional methods.
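The decomposition idea described in the abstract can be illustrated with additive secret sharing: a value is split into random pieces that sum to the original, and the pieces can later be re-randomized without ever reconstructing the value. This is a minimal sketch of that general technique only, not the paper's actual NG protocol; the function names (`share`, `rerandomize`) and the single-coordinator re-randomization step are illustrative assumptions, whereas the paper has each server update its piece independently.

```python
import random

MODULUS = 2**31 - 1  # illustrative modulus for share arithmetic


def share(value, n_servers, modulus=MODULUS):
    """Decompose a value into n additive shares that sum to it mod modulus."""
    shares = [random.randrange(modulus) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares


def rerandomize(shares, modulus=MODULUS):
    """Update the decomposition without reconstructing the secret:
    add random offsets that sum to zero, so each piece changes
    while the hidden value stays the same."""
    offsets = [random.randrange(modulus) for _ in range(len(shares) - 1)]
    offsets.append((-sum(offsets)) % modulus)
    return [(s + o) % modulus for s, o in zip(shares, offsets)]


secret = 42
pieces = share(secret, 3)          # stored one per server
new_pieces = rerandomize(pieces)   # decomposition updated, secret unchanged
assert sum(new_pieces) % MODULUS == secret
```

The point of the sketch is the last assertion: the decomposition changes over time, limiting what an attacker learns from any fixed set of pieces, yet no party ever holds the restored value.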