Robust peer-to-peer learning via secure multi-party computation
Yongkang Luo, Wenjian Luo, Ruizhuo Zhang, Hongwei Zhang, Yuhui Shi
Journal of Information and Intelligence, Volume 1, Issue 4, November 2023, Pages 341-351. DOI: 10.1016/j.jiixd.2023.08.003
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2949715923000550/pdfft?md5=bc876c86904042971fa81e6e58d46700&pid=1-s2.0-S2949715923000550-main.pdf
To solve the data island problem, federated learning (FL) provides a solution paradigm in which each client sends its model parameters, rather than its data, to a server for model aggregation. Peer-to-peer (P2P) federated learning further improves the robustness of the system: there is no server, and each client communicates directly with the others. For secure aggregation, secure multi-party computation (SMPC) protocols have been utilized in a peer-to-peer manner. However, ideal SMPC protocols can fail when some clients drop out. In this paper, we propose a robust peer-to-peer learning (RP2PL) algorithm via SMPC to resist clients dropping out. We improve the segment-based SMPC protocol by adding a check and designing the generation method of the random segments. In RP2PL, each client aggregates its model via the improved robust secure multi-party computation protocol after finishing local training. Experimental results demonstrate that the RP2PL paradigm can mitigate clients dropping out with no significant degradation in performance.
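To illustrate the kind of segment-based secure aggregation the abstract refers to, the following Python sketch shows plain additive secret sharing among peers: each client splits its update into random segments that sum to the original, sends one segment to each peer, and only partial sums are revealed, so the aggregate is recovered without exposing any individual update. This is a minimal sketch under the assumption of no drop-outs; the function names (split_into_segments, secure_aggregate) are illustrative and not from the paper, and it omits the check, the dedicated random-segment generation method, and the drop-out handling that RP2PL adds.

```python
import numpy as np

def split_into_segments(update, n_peers, rng):
    """Split a model update into n_peers random additive segments.

    The first n_peers - 1 segments are random; the last is chosen so that
    all segments sum exactly to the original update, so no single segment
    reveals the update on its own.
    """
    segments = [rng.normal(size=update.shape) for _ in range(n_peers - 1)]
    segments.append(update - np.sum(segments, axis=0))
    return segments

def secure_aggregate(client_updates, rng):
    """Aggregate updates without any client seeing another's raw update.

    Each client sends one segment to every client (including itself);
    each client sums the segments it received and publishes only that
    partial sum. The sum of all partial sums equals the sum of all updates.
    """
    n = len(client_updates)
    # outgoing[i][j] is the segment client i sends to client j
    outgoing = [split_into_segments(u, n, rng) for u in client_updates]
    # each client j sums what it received and publishes the partial sum
    partial_sums = [np.sum([outgoing[i][j] for i in range(n)], axis=0)
                    for j in range(n)]
    return np.sum(partial_sums, axis=0) / n  # averaged global update

# Toy demo: 4 clients, 5-dimensional "model updates"
rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(4)]
agg = secure_aggregate(updates, rng)
print(np.allclose(agg, np.mean(updates, axis=0)))  # True
```

In this basic scheme, a single client dropping out after sending only some of its segments would corrupt the aggregate, which is the failure mode the paper's check and robust segment-generation method are designed to address.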