{"title":"FedHide:通过隐藏在邻居中进行联合学习","authors":"Hyunsin Park, Sungrack Yun","doi":"arxiv-2409.07808","DOIUrl":null,"url":null,"abstract":"We propose a prototype-based federated learning method designed for embedding\nnetworks in classification or verification tasks. Our focus is on scenarios\nwhere each client has data from a single class. The main challenge is to\ndevelop an embedding network that can distinguish between different classes\nwhile adhering to privacy constraints. Sharing true class prototypes with the\nserver or other clients could potentially compromise sensitive information. To\ntackle this issue, we propose a proxy class prototype that will be shared among\nclients instead of the true class prototype. Our approach generates proxy class\nprototypes by linearly combining them with their nearest neighbors. This\ntechnique conceals the true class prototype while enabling clients to learn\ndiscriminative embedding networks. We compare our method to alternative\ntechniques, such as adding random Gaussian noise and using random selection\nwith cosine similarity constraints. Furthermore, we evaluate the robustness of\nour approach against gradient inversion attacks and introduce a measure for\nprototype leakage. This measure quantifies the extent of private information\nrevealed when sharing the proposed proxy class prototype. Moreover, we provide\na theoretical analysis of the convergence properties of our approach. Our\nproposed method for federated learning from scratch demonstrates its\neffectiveness through empirical results on three benchmark datasets: CIFAR-100,\nVoxCeleb1, and VGGFace2.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedHide: Federated Learning by Hiding in the Neighbors\",\"authors\":\"Hyunsin Park, Sungrack Yun\",\"doi\":\"arxiv-2409.07808\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a prototype-based federated learning method designed for embedding\\nnetworks in classification or verification tasks. Our focus is on scenarios\\nwhere each client has data from a single class. The main challenge is to\\ndevelop an embedding network that can distinguish between different classes\\nwhile adhering to privacy constraints. Sharing true class prototypes with the\\nserver or other clients could potentially compromise sensitive information. To\\ntackle this issue, we propose a proxy class prototype that will be shared among\\nclients instead of the true class prototype. Our approach generates proxy class\\nprototypes by linearly combining them with their nearest neighbors. This\\ntechnique conceals the true class prototype while enabling clients to learn\\ndiscriminative embedding networks. We compare our method to alternative\\ntechniques, such as adding random Gaussian noise and using random selection\\nwith cosine similarity constraints. Furthermore, we evaluate the robustness of\\nour approach against gradient inversion attacks and introduce a measure for\\nprototype leakage. This measure quantifies the extent of private information\\nrevealed when sharing the proposed proxy class prototype. Moreover, we provide\\na theoretical analysis of the convergence properties of our approach. 
Our\\nproposed method for federated learning from scratch demonstrates its\\neffectiveness through empirical results on three benchmark datasets: CIFAR-100,\\nVoxCeleb1, and VGGFace2.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07808\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07808","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FedHide: Federated Learning by Hiding in the Neighbors
We propose a prototype-based federated learning method designed for embedding
networks in classification or verification tasks. Our focus is on scenarios
where each client has data from a single class. The main challenge is to
develop an embedding network that can distinguish between different classes
while adhering to privacy constraints. Sharing true class prototypes with the
server or other clients could compromise sensitive information. To
tackle this issue, we propose a proxy class prototype that will be shared among
clients instead of the true class prototype. Our approach generates proxy class
prototypes by linearly combining true class prototypes with their nearest neighbors. This
technique conceals the true class prototype while enabling clients to learn
discriminative embedding networks. We compare our method to alternative
techniques, such as adding random Gaussian noise and using random selection
with cosine similarity constraints. Furthermore, we evaluate the robustness of
our approach against gradient inversion attacks and introduce a measure for
prototype leakage. This measure quantifies the extent of private information
revealed when sharing the proposed proxy class prototype. Moreover, we provide
a theoretical analysis of the convergence properties of our approach. Empirical
results on three benchmark datasets, CIFAR-100, VoxCeleb1, and VGGFace2,
demonstrate the effectiveness of our proposed method for federated learning
from scratch.
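
The abstract describes generating a proxy class prototype as a linear combination of the true prototype and its nearest neighbors, and mentions a Gaussian-noise baseline. The sketch below is a minimal illustration of that idea under stated assumptions: neighbors are selected by cosine similarity on L2-normalized prototypes, and the parameters k (number of neighbors), alpha (weight kept on the true prototype), and sigma (noise scale) are illustrative choices, not values from the paper.

```python
import numpy as np

def proxy_prototype(true_proto, other_protos, k=3, alpha=0.5):
    """Sketch of a proxy class prototype: linearly combine the true prototype
    with its k nearest neighbors among other prototypes.
    k and alpha are hypothetical parameters, not values from the paper."""
    # L2-normalize so nearest neighbors can be ranked by cosine similarity.
    true_proto = true_proto / np.linalg.norm(true_proto)
    others = other_protos / np.linalg.norm(other_protos, axis=1, keepdims=True)

    # Cosine similarity of the true prototype to every candidate neighbor.
    sims = others @ true_proto
    nearest = others[np.argsort(-sims)[:k]]  # k most similar prototypes

    # Linear combination: keep alpha of the true prototype, spread the rest
    # uniformly over the nearest neighbors, then re-normalize.
    proxy = alpha * true_proto + (1 - alpha) * nearest.mean(axis=0)
    return proxy / np.linalg.norm(proxy)

def gaussian_noise_prototype(true_proto, sigma=0.1):
    """Baseline mentioned in the abstract: perturb the true prototype with
    random Gaussian noise (sigma is an illustrative value)."""
    noisy = true_proto + np.random.normal(scale=sigma, size=true_proto.shape)
    return noisy / np.linalg.norm(noisy)

# Example usage with synthetic 128-dimensional prototypes.
protos = np.random.randn(10, 128)
proxy = proxy_prototype(protos[0], protos[1:])
```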
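The abstract also introduces a measure of prototype leakage without specifying its form. As a purely illustrative stand-in, the snippet below scores leakage as the cosine similarity between the shared vector and the true class prototype, where a value near 1 means the shared vector reveals the true prototype almost exactly. This is an assumption for illustration, not the paper's definition.

```python
def prototype_leakage(true_proto, shared_proto):
    """Illustrative leakage score: cosine similarity between the true class
    prototype and the vector actually shared with the server.
    NOTE: an assumed stand-in, not the measure defined in the paper."""
    t = true_proto / np.linalg.norm(true_proto)
    s = shared_proto / np.linalg.norm(shared_proto)
    return float(t @ s)
```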