{"title":"Privacy-Preserving Personalized Decentralized Learning With Fast Convergence","authors":"Jing Qiao;Zhenzhen Xie;Zhigao Zheng;Xiao Zhang;Zhenyu Zhang;Qun Zhang;Dongxiao Yu","doi":"10.1109/TCE.2024.3475370","DOIUrl":null,"url":null,"abstract":"Personalized decentralized learning aims to train individual personalized models for each client to adapt to Non-IID data distributions and heterogeneous environments. However, the distributed nature of decentralized learning is insufficient for protecting client training data from gradient leakage danger. In this paper, we investigate a \n<underline>p</u>\nrivacy-preserving \n<underline>p</u>\nersonalized \n<underline>d</u>\necentralized \n<underline>l</u>\nearning optimization mechanism instead of traditional SGD. We design the P2DL mechanism to optimize our proposed objective function, whereby adjusting the regularization term parameter for a resilient local-global trade-off. Instead of exchanging gradients or models, auxiliary variables with knowledge can be transferred among clients to avoid model inversion and reconstruction attacks. We also provide theoretical convergence guarantees for both synchronous and asynchronous settings. Particularly, in case of synchronous communication, its convergence rate \n<inline-formula> <tex-math>$\\mathcal {O}\\left ({{{}\\frac {1}{k}}}\\right)$ </tex-math></inline-formula>\n matches with the optimal result in decentralized learning, where k is the number of communication rounds. Extensive experiments are conducted to verify the effectiveness of newly proposed P2DL comparing with the state of the arts.","PeriodicalId":13208,"journal":{"name":"IEEE Transactions on Consumer Electronics","volume":"70 4","pages":"6618-6629"},"PeriodicalIF":4.3000,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Consumer Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10706834/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Personalized decentralized learning aims to train an individual personalized model for each client to adapt to Non-IID data distributions and heterogeneous environments. However, the distributed nature of decentralized learning is insufficient to protect client training data from the danger of gradient leakage. In this paper, we investigate a privacy-preserving personalized decentralized learning (P2DL) optimization mechanism as an alternative to traditional SGD. We design the P2DL mechanism to optimize our proposed objective function, adjusting the regularization-term parameter for a resilient local-global trade-off. Instead of exchanging gradients or models, auxiliary variables carrying knowledge are transferred among clients to avoid model inversion and reconstruction attacks. We also provide theoretical convergence guarantees for both synchronous and asynchronous settings. In particular, in the synchronous setting the convergence rate $\mathcal{O}\left(\frac{1}{k}\right)$ matches the optimal result in decentralized learning, where $k$ is the number of communication rounds. Extensive experiments verify the effectiveness of the newly proposed P2DL compared with the state of the art.
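The abstract does not spell out the objective function, so the following is only a minimal Python sketch of the general idea it describes: each client minimizes its local loss plus a regularizer that pulls its personalized model toward a shared auxiliary variable, and clients exchange that auxiliary variable rather than gradients or models. The function names (`local_step`), the parameter `lam`, the averaging rule, and the quadratic toy losses are all illustrative assumptions, not the paper's P2DL algorithm.

```python
import numpy as np

# Hypothetical formulation (not from the paper): client i minimizes
#   f_i(x_i) + (lam / 2) * ||x_i - z||^2,
# where z is a shared auxiliary variable exchanged instead of gradients.

def local_step(x_i, z, grad_f_i, lam, lr):
    """One gradient step on the regularized personalized objective."""
    g = grad_f_i(x_i) + lam * (x_i - z)  # pull toward the shared variable
    return x_i - lr * g

rng = np.random.default_rng(0)
targets = rng.normal(size=(4, 5))                 # heterogeneous local optima (Non-IID proxy)
grads = [lambda x, t=t: x - t for t in targets]   # gradient of 0.5 * ||x - t||^2
xs = [np.zeros(5) for _ in range(4)]              # personalized models, one per client
z = np.zeros(5)                                   # shared auxiliary variable

for k in range(200):                              # synchronous communication rounds
    xs = [local_step(x, z, g, lam=0.5, lr=0.1) for x, g in zip(xs, grads)]
    z = np.mean(np.stack(xs), axis=0)             # simple averaging stands in for the
                                                  # paper's knowledge-transfer rule
```

A larger `lam` pulls the personalized models toward consensus, while `lam -> 0` keeps them fully local, which is one way to read the "resilient local-global trade-off" that the regularization-term parameter controls.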
Journal description:
The main focus of the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture, and end use of mass-market electronics, systems, software, and services for consumers.