Data Compliance Utilization Method Based on Adaptive Differential Privacy and Federated Learning

Authors: Haiyan Kang, Bing Wu, Chong Zhang
Journal: International Journal of Neural Systems, p. 2550060 (Impact Factor: 6.4)
DOI: 10.1142/S0129065725500601 (https://doi.org/10.1142/S0129065725500601)
Publication date: 2025-08-30
Publication type: Journal Article
Abstract
Federated learning (FL) coordinates multiple clients to train a shared model without their local data ever leaving the client, and is therefore naturally privacy-preserving. However, a malicious attacker can still steal intermediate parameters during model training and infer a user's original data from them, leaking sensitive information. To address this problem, we propose an adaptive differential privacy blockchain federated learning (ADP-BCFL) method that enables compliant use of distributed data while ensuring security. First, we use a blockchain to provide secure storage and valid querying of user summary data. Second, we propose an adaptive differential privacy (DP) mechanism for the federated learning process: it adaptively adjusts the parameter-clipping threshold according to the characteristics of the parameters, thereby controlling the amount of injected noise and maintaining good global model accuracy while effectively defending against inference attacks. Finally, ADP-BCFL was validated on the MNIST and Fashion-MNIST datasets and on a spatiotemporal dataset, showing that it effectively balances model performance and privacy.
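The abstract's central idea, an adaptive clipping threshold that tracks the characteristics of the submitted parameters instead of being a fixed hyperparameter, can be illustrated with a minimal sketch. The code below is not the paper's ADP-BCFL algorithm; it is a generic illustration that sets the clipping threshold to the median norm of the current round's client updates, clips each update to that threshold, and adds Gaussian noise scaled by it. The function name `adaptive_dp_aggregate` and the choice of the median as the adaptive statistic are assumptions for illustration only.

```python
import numpy as np


def adaptive_dp_aggregate(client_updates, noise_multiplier=1.0, rng=None):
    """Aggregate client updates with an adaptive clipping threshold.

    The threshold is derived from the updates themselves (here, the median
    of their L2 norms), so the amount of injected noise adapts to the
    magnitude of the parameters in each round.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = [np.linalg.norm(u) for u in client_updates]

    # Adaptive threshold: median of this round's update norms.
    clip = float(np.median(norms))

    # Clip each update so its norm is at most `clip`.
    clipped = [u * min(1.0, clip / (n + 1e-12))
               for u, n in zip(client_updates, norms)]

    # Average the clipped updates, then add Gaussian noise whose scale
    # is proportional to the (adaptive) clipping threshold.
    aggregate = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip / len(client_updates)
    noise = rng.normal(0.0, sigma, size=aggregate.shape)
    return aggregate + noise, clip
```

Because the threshold shrinks when update norms shrink (as training converges), less noise is injected in later rounds, which is one way an adaptive mechanism can preserve global model accuracy compared to a fixed clipping bound.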