Sitong Li , Yifan Liu , Fan Feng , Yi Liu , Xiaofei Li , Zhenpeng Liu
Journal of Information Security and Applications, Volume 86, Article 103890. DOI: 10.1016/j.jisa.2024.103890. Published 2024-09-18. Impact Factor 3.8, JCR Q2 (Computer Science, Information Systems).
HierFedPDP: Hierarchical federated learning with personalized differential privacy
Federated Learning (FL) is an innovative approach that enables multiple parties to collaboratively train a machine learning model while keeping their data private. This method significantly enhances data security because raw data is never shared among participants. However, a critical challenge in FL is the potential leakage of sensitive information through shared model updates. To address this, differential privacy techniques, which add random noise to data or model updates, are used to prevent individual data points from being inferred. Traditional approaches to differential privacy typically use a fixed privacy budget, which may not account for the varying sensitivity of data and can degrade model accuracy. To overcome these limitations, we introduce HierFedPDP, a new FL framework that jointly optimizes data privacy and model performance. HierFedPDP employs a three-tier client–edge–cloud architecture, maximizing the use of edge computing to alleviate the computational load on the central server. At the core of HierFedPDP is a personalized local differential privacy mechanism that tailors privacy settings to data sensitivity, thereby strengthening data protection while maintaining high utility. Our framework not only fortifies privacy but also improves model accuracy. Specifically, experiments on the MNIST dataset show that HierFedPDP outperforms existing models, increasing accuracy by 0.84% to 2.36%, with effective improvements on CIFAR-10 as well. This research advances the capabilities of FL in protecting data privacy and provides valuable insights for designing more efficient distributed learning systems.
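The two ideas at the heart of the abstract can be sketched together: each client clips its model update and adds Gaussian noise calibrated to its own privacy budget (personalized local differential privacy), edge servers average the updates from their clients, and the cloud averages the edge results. This is a minimal illustration, not the paper's exact mechanism; the client names, epsilon values, and clipping norm below are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(update, clip_norm, epsilon, delta=1e-5):
    """Clip an update to bound its sensitivity, then apply the
    Gaussian mechanism with a per-client budget epsilon."""
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    # Smaller epsilon (more sensitive data) -> larger sigma -> more noise.
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def average(updates):
    # Simple unweighted federated averaging of a list of update vectors.
    return np.mean(updates, axis=0)

# Three tiers: clients privatize locally with personalized epsilons,
# edge servers aggregate their clients, the cloud aggregates the edges.
edges = {
    "edge_1": {"client_a": 0.5, "client_b": 2.0},  # hypothetical budgets
    "edge_2": {"client_c": 1.0},
}
model_dim = 8
edge_models = []
for clients in edges.values():
    noisy = [privatize(np.ones(model_dim), 1.0, eps) for eps in clients.values()]
    edge_models.append(average(noisy))   # edge-level aggregation
global_model = average(edge_models)      # cloud-level aggregation
```

Note how the personalization enters only through `epsilon`: a client holding sensitive data declares a small budget and contributes a noisier update, while low-sensitivity clients keep their updates closer to the true gradient, which is the utility/privacy trade-off the framework tunes.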
Journal introduction:
Journal of Information Security and Applications (JISA) focuses on the original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view on modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.