Strategic Gradient Transmission With Targeted Privacy-Awareness in Model Training: A Stackelberg Game Analysis

Hezhe Sun; Yufei Wang; Huiwen Yang; Kaixuan Huo; Yuzhe Li

IEEE Transactions on Artificial Intelligence, vol. 5, no. 9, pp. 4635–4648, published 16 April 2024. DOI: 10.1109/TAI.2024.3389611 (https://ieeexplore.ieee.org/document/10502336/)
Privacy-aware machine learning paradigms have attracted widespread attention for their ability to safeguard the local privacy of data owners, preventing the leakage of private information to untrustworthy platforms or malicious third parties. This article characterizes the interactions between the learner and the data owner in such a privacy-aware training process. Here, the data owner is reluctant to transmit the original gradient to the learner because of potential cybersecurity threats such as gradient leakage and membership inference. To address this concern, we propose a Stackelberg game framework that models the training process. In this framework, the data owner's objective is not to maximize the discrepancy between the gradient the learner obtains and the true gradient, but rather to ensure that the learner obtains a gradient closely resembling one deliberately designed by the data owner; the learner's objective, in turn, is to recover the true gradient as accurately as possible. We derive the optimal encoder and decoder under these mismatched cost functions and characterize the equilibrium for specific cases, balancing model accuracy against local privacy. Numerical examples illustrate the main results, and we conclude with a broader discussion suggesting future investigations into reliable countermeasure designs.
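The abstract does not spell out the paper's exact formulation, but the leader-follower structure it describes can be illustrated with a toy model. Below is a minimal numerical sketch, assuming a scalar Gaussian gradient, an affine-plus-noise encoder for the data owner (the Stackelberg leader), a linear minimum-mean-square-error (LMMSE) decoder for the learner (the follower), and a linear "designed" target gradient that the owner steers the learner toward. All parameters and the specific encoder/decoder families here are hypothetical illustrations, not the paper's derivation.

```python
# Toy Stackelberg game with mismatched cost functions (illustrative sketch,
# not the paper's method). The data owner commits to an encoder first; the
# learner best-responds with an LMMSE decoder; the owner then optimizes its
# own, different, objective knowing that best response.
import numpy as np

mu_g, var_g = 1.0, 4.0    # prior on the true gradient: g ~ N(mu_g, var_g)
t, s = 0.7, 0.2           # owner's designed target gradient: g_dagger = t*g + s

def follower_best_response(alpha, var_n):
    """Learner's LMMSE decoder d(m) = k*m + c for the committed encoding
    m = alpha*g + n, n ~ N(0, var_n); it minimizes E[(d(m) - g)^2]."""
    var_m = alpha**2 * var_g + var_n
    k = alpha * var_g / var_m          # decoder gain
    c = mu_g - k * alpha * mu_g        # decoder offset
    return k, c

def leader_cost(alpha, var_n):
    """Owner's mismatched cost E[(d(m) - g_dagger)^2], evaluated with the
    learner already playing its best response."""
    k, c = follower_best_response(alpha, var_n)
    A = k * alpha - t                  # mismatch on the g-coefficient
    B = c - s                          # deterministic offset mismatch
    return A**2 * var_g + k**2 * var_n + (A * mu_g + B) ** 2

# Stackelberg order of play: grid-search the leader's encoder, with the
# follower's closed-form best response embedded in the leader's cost.
alphas = np.linspace(0.05, 2.0, 80)
var_ns = np.linspace(1e-3, 8.0, 80)
costs = np.array([[leader_cost(a, v) for v in var_ns] for a in alphas])
i, j = np.unravel_index(costs.argmin(), costs.shape)
a_star, v_star = alphas[i], var_ns[j]
k_star, c_star = follower_best_response(a_star, v_star)

print(f"equilibrium encoder: m = {a_star:.3f}*g + n, Var(n) = {v_star:.3f}")
print(f"equilibrium decoder: d(m) = {k_star:.3f}*m + {c_star:.3f}")
print(f"owner cost E[(d(m)-g_dagger)^2] (analytic) = {costs[i, j]:.3f}")

# Monte Carlo check of both players' costs at the equilibrium strategies.
rng = np.random.default_rng(0)
g = rng.normal(mu_g, np.sqrt(var_g), 200_000)
m = a_star * g + rng.normal(0.0, np.sqrt(v_star), g.size)
d = k_star * m + c_star
print(f"owner cost   E[(d(m)-g_dagger)^2] (MC) = {np.mean((d - (t*g + s))**2):.3f}")
print(f"learner cost E[(d(m)-g)^2]        (MC) = {np.mean((d - g)**2):.3f}")
```

The bilevel search mirrors the game described in the abstract: because the leader moves first and its cost (distance to the designed gradient) differs from the follower's cost (distance to the true gradient), the equilibrium trades off how informative the transmitted gradient is against how closely the learner's estimate tracks the owner's target.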