SFLES: Shuffled differentially private federated learning with early-stopping strategy

Yanhui Li, Chen Huang, Yuxin Zhao, Xinjie Du, Junqing Huang, Ye Yuan

Expert Systems with Applications, Volume 299, Article 129970 (2025). DOI: 10.1016/j.eswa.2025.129970
Citations: 0
Abstract
Federated Learning (FL) allows multiple clients to collaboratively train a global model without sharing raw data, yet it remains susceptible to privacy attacks. The recently proposed shuffle model of differential privacy (DP) offers a promising solution by leveraging privacy amplification to achieve strong local privacy guarantees while maintaining high utility. However, existing approaches based on this model rely on conventional Gaussian or Laplace mechanisms, which introduce unbounded noise and risk significant data distortion. Furthermore, these methods typically exhibit inefficient privacy budget allocation and suffer from excessive communication overhead and computational costs imposed by fixed training rounds, ultimately degrading performance. To address these limitations, we present SFLES, a novel shuffled differentially private FL framework designed to robustly prevent privacy leakage while optimizing model utility. In particular, SFLES employs Top-k sparsification to compress local model updates and integrates an adaptive, layer-wise bounded noise mechanism based on a symmetric piecewise distribution for fine-grained noise injection. To enhance efficiency, we propose a novel directional similarity-aware aggregation strategy that prioritizes updates with consistent directional trends, accelerating convergence under DP constraints. Additionally, SFLES incorporates a dynamic early-stopping strategy that tracks update conflict rates and global accuracy trends, terminating training once convergence is detected and reallocating the residual privacy budget to subsequent rounds for improved utility. Extensive evaluations on MNIST, Fashion-MNIST, and CIFAR-10 demonstrate that SFLES surpasses state-of-the-art alternatives in privacy-utility trade-off, convergence speed, and communication efficiency.
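To make the first two building blocks concrete, below is a minimal Python sketch of Top-k sparsification of a local update followed by bounded, symmetric piecewise perturbation. The abstract does not specify the paper's adaptive layer-wise mechanism, so the sketch substitutes the standard Piecewise Mechanism of Wang et al. (2019), whose output is bounded in [-C, C]; the function names, the clipping step, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Sketch: Top-k sparsification + bounded symmetric piecewise noise.
# The mechanism below is the standard Piecewise Mechanism (Wang et al., 2019),
# used here only as a stand-in for the paper's layer-wise adaptive variant.
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude coordinates of an update; zero the rest."""
    flat = update.ravel()
    sparse = np.zeros_like(flat)
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

def piecewise_mechanism(t: float, eps: float, rng: np.random.Generator) -> float:
    """Perturb t in [-1, 1] under eps-LDP; the output stays bounded in [-C, C]."""
    e = np.exp(eps / 2.0)
    C = (e + 1.0) / (e - 1.0)
    l = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0  # left end of high-probability band
    r = l + C - 1.0                            # right end of that band
    if rng.random() < e / (e + 1.0):
        return rng.uniform(l, r)               # sample inside the band
    # otherwise sample uniformly from the two low-probability tails
    left_len, right_len = l + C, C - r
    u = rng.uniform(0.0, left_len + right_len)
    return -C + u if u < left_len else r + (u - left_len)

rng = np.random.default_rng(0)
update = rng.normal(size=1000)                 # a toy local model update
sparse = top_k_sparsify(update, k=50)
# clip each kept coordinate to [-1, 1] before perturbation (illustrative choice)
noisy = np.array([piecewise_mechanism(float(np.clip(v, -1, 1)), eps=2.0, rng=rng)
                  if v != 0 else 0.0 for v in sparse])
```

Unlike Gaussian or Laplace noise, every privatized coordinate here is guaranteed to lie in [-C, C], which is the boundedness property the abstract contrasts against conventional mechanisms.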
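Similarly, a hedged sketch of directional similarity-aware aggregation and a conflict-rate-based early-stopping check: client updates are weighted by their cosine similarity to the mean update direction, and training stops once the conflict rate stays high while accuracy has plateaued. The weighting rule, the conflict threshold, and the patience window are assumptions for illustration, not the paper's actual algorithm.

```python
# Sketch: directional similarity-aware aggregation + early-stopping check.
# Weighting scheme and thresholds are hypothetical, inspired by the abstract.
import numpy as np

def aggregate_by_direction(updates):
    """Weight updates by cosine similarity to the mean direction; also return
    the conflict rate (fraction of clients pointing against that direction)."""
    mean_dir = np.mean(updates, axis=0)
    mean_dir = mean_dir / (np.linalg.norm(mean_dir) + 1e-12)
    cos = np.array([u @ mean_dir / (np.linalg.norm(u) + 1e-12) for u in updates])
    conflict_rate = float(np.mean(cos < 0.0))
    w = np.clip(cos, 0.0, None)        # prioritize direction-consistent updates
    w = w / w.sum() if w.sum() > 0 else np.full(len(updates), 1.0 / len(updates))
    agg = sum(wi * ui for wi, ui in zip(w, updates))
    return agg, conflict_rate

def should_stop(conflict_hist, acc_hist, patience=3, acc_tol=1e-3):
    """Hypothetical rule: stop when accuracy has plateaued over a patience
    window while conflict rates stay high, signaling convergence."""
    if len(acc_hist) <= patience:
        return False
    plateaued = max(acc_hist[-patience:]) - acc_hist[-patience - 1] < acc_tol
    conflicted = np.mean(conflict_hist[-patience:]) > 0.5
    return plateaued and conflicted

rng = np.random.default_rng(1)
updates = [rng.normal(size=100) for _ in range(10)]
agg, conflict = aggregate_by_direction(updates)
```

Under such a rule, any privacy budget reserved for the rounds that are skipped after early stopping could be redistributed to the rounds that do run, which is the budget-reallocation idea the abstract describes.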
About the journal:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.