{"title":"通过状态分解实现分散优化的隐私保护推拉法","authors":"Huqiang Cheng;Xiaofeng Liao;Huaqing Li;Qingguo Lü;You Zhao","doi":"10.1109/TSIPN.2024.3402430","DOIUrl":null,"url":null,"abstract":"Distributed optimization is manifesting great potential in multiple fields, e.g., machine learning, control, resource allocation, etc. Existing decentralized optimization algorithms require sharing explicit state information among the agents, which raises the risk of private information leakage. To ensure privacy security, combining information security mechanisms, such as differential privacy and homomorphic encryption, with traditional decentralized optimization algorithms is a commonly used means. However, this may either sacrifice optimization accuracy or incur a heavy computational burden. To overcome these shortcomings, we develop a novel privacy-preserving decentralized optimization algorithm, named PPSD, that combines gradient tracking with a state decomposition mechanism. Specifically, each agent decomposes its state associated with the gradient into two substates. One substate is used for interaction with neighboring agents, and the other substate containing private information acts only on the first substate and thus is entirely agnostic to other agents. When the objective function is smooth and satisfies the Polyak-Łojasiewicz (PL) condition, PPSD attains an \n<inline-formula><tex-math>$R$</tex-math></inline-formula>\n-linear convergence rate. Moreover, the algorithm can preserve the normal agents' private information from being leaked to honest-but-curious attackers. Simulations further confirm the results.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"10 ","pages":"513-526"},"PeriodicalIF":3.0000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition\",\"authors\":\"Huqiang Cheng;Xiaofeng Liao;Huaqing Li;Qingguo Lü;You Zhao\",\"doi\":\"10.1109/TSIPN.2024.3402430\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Distributed optimization is manifesting great potential in multiple fields, e.g., machine learning, control, resource allocation, etc. Existing decentralized optimization algorithms require sharing explicit state information among the agents, which raises the risk of private information leakage. To ensure privacy security, combining information security mechanisms, such as differential privacy and homomorphic encryption, with traditional decentralized optimization algorithms is a commonly used means. However, this may either sacrifice optimization accuracy or incur a heavy computational burden. To overcome these shortcomings, we develop a novel privacy-preserving decentralized optimization algorithm, named PPSD, that combines gradient tracking with a state decomposition mechanism. Specifically, each agent decomposes its state associated with the gradient into two substates. One substate is used for interaction with neighboring agents, and the other substate containing private information acts only on the first substate and thus is entirely agnostic to other agents. When the objective function is smooth and satisfies the Polyak-Łojasiewicz (PL) condition, PPSD attains an \\n<inline-formula><tex-math>$R$</tex-math></inline-formula>\\n-linear convergence rate. 
Moreover, the algorithm can preserve the normal agents' private information from being leaked to honest-but-curious attackers. Simulations further confirm the results.\",\"PeriodicalId\":56268,\"journal\":{\"name\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"volume\":\"10 \",\"pages\":\"513-526\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10535197/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal and Information Processing over Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10535197/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition
Distributed optimization shows great potential in many fields, e.g., machine learning, control, and resource allocation. Existing decentralized optimization algorithms require agents to share explicit state information, which raises the risk of private information leakage. To ensure privacy, a common approach is to combine information security mechanisms, such as differential privacy or homomorphic encryption, with traditional decentralized optimization algorithms. However, this may either sacrifice optimization accuracy or incur a heavy computational burden. To overcome these shortcomings, we develop a novel privacy-preserving decentralized optimization algorithm, named PPSD, that combines gradient tracking with a state decomposition mechanism. Specifically, each agent decomposes the state associated with its gradient into two substates. One substate is used for interaction with neighboring agents, while the other, which contains the private information, acts only on the first substate and is therefore entirely invisible to other agents. When the objective function is smooth and satisfies the Polyak-Łojasiewicz (PL) condition, PPSD attains an $R$-linear convergence rate. Moreover, the algorithm protects the normal agents' private information from being leaked to honest-but-curious attackers. Simulations further confirm these results.
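As a rough, hypothetical illustration of the state-decomposition idea summarized in the abstract, the following Python/NumPy sketch applies it to a plain average-consensus problem rather than to the full push-pull gradient-tracking update of PPSD. The network, step size eps, and coupling weights w_ab are assumptions made for the example; the point is only that the public substate alpha is the sole quantity exchanged with neighbors, while the private substate beta acts solely on alpha and is never observed by other agents. This sketch does not reproduce the paper's privacy analysis or convergence guarantees.

# Hypothetical sketch of state decomposition for average consensus (NOT the exact PPSD update).
# Each agent splits its state into a public substate alpha (shared with neighbors) and a
# private substate beta (coupled only to its own alpha), as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

n = 5                               # number of agents (assumed example size)
A = np.array([[0, 1, 0, 0, 1],      # assumed undirected ring adjacency matrix
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

x_true = rng.normal(size=n)         # each agent's private initial value
target = x_true.mean()              # the quantity the network should agree on

# Random split that preserves the sum: alpha + beta = 2 * x_true, so neighbors
# observing alpha alone cannot recover x_true at initialization.
alpha = rng.normal(size=n)
beta = 2.0 * x_true - alpha

eps = 0.1                           # step size (assumed small enough for stability)
w_ab = rng.uniform(0.5, 1.5, n)     # coupling weights between the two substates

for _ in range(2000):
    # Public substate mixes with neighbors' public substates and with its own private substate.
    lap = A @ alpha - A.sum(axis=1) * alpha          # graph diffusion term on alpha only
    alpha_next = alpha + eps * (lap + w_ab * (beta - alpha))
    # Private substate only ever sees its own public substate, so it is never transmitted.
    beta_next = beta + eps * w_ab * (alpha - beta)
    alpha, beta = alpha_next, beta_next

print("consensus value:", alpha.round(4))   # all substates approach the true average
print("true average   :", round(target, 4))

Because the substate coupling is symmetric, the sum of all 2n substates is conserved, so every substate converges to the average of the agents' original values while only alpha values ever traverse the network.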
Journal introduction:
The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing signals defined over vector spaces (e.g., time and space) to processing signals and information (data) defined over networks, which may be dynamically varying. In signal processing over networks, the topology of the network may define structural relationships in the data or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and diffusion or evolution of information over such networks, as well as applications of distributed signal processing.