Title: Privacy-Preserving Ranking-Based Model Integration for Federated Learning
Authors: Yuma Takeda, Kimihiro Mizutani
DOI: 10.1002/tee.70064
Journal: IEEJ Transactions on Electrical and Electronic Engineering, Vol. 20, No. 11, pp. 1829-1831
Publication date: 2025-05-19
Full text: https://onlinelibrary.wiley.com/doi/10.1002/tee.70064
Abstract
Federated learning enables collaborative model training while preserving data privacy. However, existing schemes often require access to clients' dataset sizes, posing privacy risks. This study proposes a novel scheme that conceals dataset sizes using Secure Multi-Party Computation (SMPC) to securely compare and rank clients based on their dataset sizes. The rankings are then used for model aggregation, employing ranking-based weighting to achieve accurate model reinforcement while managing computation and communication overhead. Experiments on CIFAR-10 demonstrated that the proposed scheme achieves 86% accuracy, surpassing FedAvg's 85%. Additionally, we confirmed that our scheme achieves scalability and efficiency in large-scale federated learning scenarios. © 2025 The Author(s). IEEJ Transactions on Electrical and Electronic Engineering published by Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
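To make the aggregation idea concrete, the sketch below shows how rank-based weights could replace dataset-size weights in a FedAvg-style average, so that raw dataset sizes never need to be revealed. This is a minimal illustration under assumptions: the inverse-rank weighting rule, the function names, and the plain-Python setup are hypothetical and are not the paper's exact scheme, and the SMPC comparison step that produces the ranks is not shown.

```python
# Hypothetical sketch: rank-based weighted aggregation for federated learning.
# The rank-to-weight mapping (inverse rank) is an assumption for illustration;
# the ranks themselves would come from an SMPC comparison of dataset sizes.
import numpy as np


def rank_based_weights(ranks: list[int]) -> np.ndarray:
    """Map dataset-size ranks (1 = largest dataset) to normalized aggregation weights."""
    w = 1.0 / np.asarray(ranks, dtype=float)  # assumed inverse-rank weighting
    return w / w.sum()


def aggregate(client_params: list[np.ndarray], ranks: list[int]) -> np.ndarray:
    """Weighted average of client parameter vectors using rank-based weights,
    so the server never needs the clients' actual dataset sizes."""
    weights = rank_based_weights(ranks)
    stacked = np.stack(client_params)              # shape: (num_clients, num_params)
    return np.tensordot(weights, stacked, axes=1)  # weighted sum over clients


if __name__ == "__main__":
    # Three toy clients; ranks would be produced by the secure comparison step.
    params = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
    ranks = [2, 1, 3]  # client 2 holds the largest dataset
    print(aggregate(params, ranks))
```

In this sketch the server only learns an ordering of clients, not their dataset sizes; any monotone rank-to-weight mapping could be substituted for the inverse-rank rule assumed here.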