{"title":"Gradient Reconstruction Protection Based on Sparse Learning and Gradient Perturbation in IoV","authors":"Jia Zhao, Xinyu Rao, Bokai Yang, Yanchun Wang, Jiaqi He, Hongliang Ma, Wenjia Niu, Wei Wang","doi":"10.1155/int/9253392","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Existing research indicates that original federated learning is not absolutely secure; attackers can infer the original training data based on reconstructed gradient information. Therefore, we will further investigate methods to protect data privacy and prevent adversaries from reconstructing sensitive training samples from shared gradients. To achieve this, we propose a defense strategy called SLGD, which enhances model robustness by combining sparse learning and gradient perturbation techniques. The core idea of this approach consists of two parts. First, before processing training data at the RSU, we preprocess the data using sparse techniques to reduce data transmission and compress data size. Second, the strategy extracts feature representations from the model and performs gradient filtering based on the <i>l</i><sub>2</sub> norm of this layer. Selected gradient values are then perturbed using Von Mises–Fisher (VMF) distribution to obfuscate gradient information, thereby defending against gradient reconstruction attacks and ensuring model security. Finally, we validate the effectiveness and superiority of the proposed method across different datasets and attack scenarios.</p>\n </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/9253392","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/int/9253392","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Existing research indicates that the original federated learning framework is not absolutely secure: attackers can reconstruct the original training data from shared gradient information. We therefore investigate methods to protect data privacy and prevent adversaries from reconstructing sensitive training samples from shared gradients. To this end, we propose a defense strategy called SLGD, which enhances model robustness by combining sparse learning with gradient perturbation. The core idea consists of two parts. First, before the training data are processed at the roadside unit (RSU), we preprocess them with sparsification techniques to reduce transmission volume and compress data size. Second, the strategy extracts feature representations from the model and filters gradients based on the l2 norm of that layer. The selected gradient values are then perturbed with the von Mises–Fisher (VMF) distribution to obfuscate the gradient information, thereby defending against gradient reconstruction attacks and preserving model security. Finally, we validate the effectiveness and superiority of the proposed method across different datasets and attack scenarios.
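To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of the two steps the abstract outlines: top-k sparsification as one plausible instance of the sparse preprocessing, and l2-norm-based gradient filtering followed by VMF perturbation of the selected gradients. All parameter names (`keep_ratio`, `filter_ratio`, `kappa`) are illustrative assumptions; the paper's exact choices may differ.

```python
# A minimal sketch of an SLGD-style defense, assuming top-k sparsification
# for the preprocessing step and SciPy's von Mises-Fisher sampler for the
# perturbation step. Parameters are illustrative, not from the paper.
import numpy as np
from scipy.stats import vonmises_fisher  # requires SciPy >= 1.11


def sparsify(x: np.ndarray, keep_ratio: float = 0.05) -> np.ndarray:
    """Top-k sparsification: keep only the largest-magnitude entries.

    One plausible instance of the 'sparse techniques' applied at the RSU
    to shrink data before transmission; the paper may use another scheme.
    """
    flat = x.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(x.shape)


def perturb_gradients(grads, filter_ratio=0.25, kappa=100.0, seed=None):
    """Filter gradients by l2 norm, then perturb the selected ones with VMF noise.

    grads: list of np.ndarray, one gradient tensor per layer.
    filter_ratio: fraction of layers (those with the largest l2 norms)
                  selected for perturbation.
    kappa: VMF concentration; larger kappa keeps the perturbed direction
           closer to the original gradient direction (less obfuscation).
    """
    rng = np.random.default_rng(seed)
    norms = [np.linalg.norm(g) for g in grads]
    k = max(1, int(filter_ratio * len(grads)))
    selected = np.argsort(norms)[-k:]  # largest-norm layers leak the most

    perturbed = [g.copy() for g in grads]
    for i in selected:
        g = grads[i].ravel()
        norm = np.linalg.norm(g)
        if g.size < 2 or norm == 0.0:
            continue  # VMF is defined on the unit sphere in dim >= 2
        mu = g / norm  # mean direction = original gradient direction
        sample = vonmises_fisher(mu, kappa).rvs(random_state=rng)
        direction = np.asarray(sample).reshape(-1)
        # Keep the original magnitude; only the direction is obfuscated.
        perturbed[i] = (norm * direction).reshape(grads[i].shape)
    return perturbed
```

In this sketch the VMF mean direction is the normalized original gradient, so `kappa` trades privacy against utility: a smaller `kappa` scatters the shared gradient direction more widely, making reconstruction harder at some cost in convergence.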
Journal Introduction:
The International Journal of Intelligent Systems serves as a forum for researchers interested in the broad theory underlying the construction of intelligent systems. With its peer-reviewed format, the journal publishes editorials written by today's experts in the field. Because new developments are introduced every day, there is much to be learned: examination, analysis, creation, information retrieval, human–computer interaction, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.