Balancing Privacy and Utility in Split Learning: An Adversarial Channel Pruning-Based Approach
Afnan Alhindi; Saad Al-Ahmadi; Mohamed Maher Ben Ismail
IEEE Access, vol. 13, pp. 10094-10110. Published 2025-01-13. DOI: 10.1109/ACCESS.2025.3528575
Impact Factor: 3.4 (JCR Q2, Computer Science, Information Systems)
URL: https://ieeexplore.ieee.org/document/10838505/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10838505
Citations: 0
Abstract
Machine Learning (ML) has been applied across diverse fields with significant success. However, deploying ML models on resource-constrained devices, such as edge devices, remains challenging due to limited computing resources. Moreover, training such models on private data carries serious privacy risks arising from the inadvertent disclosure of sensitive information. Split Learning (SL) has emerged as a promising technique to mitigate these risks by partitioning a neural network into client-side and server-side subnetworks. Note, however, that although only the extracted features are transmitted to the server, sensitive information can still be unwittingly revealed. Existing approaches that address this privacy concern in SL struggle to balance privacy and utility. This research introduces a novel privacy-preserving split learning approach that integrates: 1) adversarial learning and 2) network channel pruning. Specifically, adversarial learning aims to minimize the risk of sensitive data leakage while maximizing the performance of the target prediction task. Furthermore, channel pruning performed jointly with the adversarial training allows the model to dynamically adjust and reactivate pruned channels. The combination of these two techniques makes the intermediate representations (features) exchanged between the client and server models less informative and more robust against data reconstruction attacks. Accordingly, the proposed approach enhances data privacy without compromising the model's performance on the intended utility task. The contributions of this research were validated and assessed using benchmark datasets. The experiments demonstrated the superior defense ability of the proposed approach against data reconstruction attacks, in comparison with relevant state-of-the-art approaches.
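As a loose illustration of the channel-pruning idea described in the abstract, the sketch below zeroes out the feature channels with the smallest L1 norm before they would be sent to the server. The function name `prune_channels`, the L1-norm saliency criterion, and the fixed keep ratio are illustrative assumptions, not the authors' implementation (which prunes jointly with adversarial training and can dynamically reactivate channels).

```python
import numpy as np

def prune_channels(features, keep_ratio=0.5):
    """Zero out the feature channels with the smallest L1 norm.

    features: array of shape (C, H, W) -- intermediate activations
              that a split-learning client would send to the server.
    Returns the masked features and the binary per-channel mask.
    """
    c = features.shape[0]
    # Per-channel L1 norm as a simple saliency score.
    norms = np.abs(features).reshape(c, -1).sum(axis=1)
    k = max(1, int(round(c * keep_ratio)))
    keep = np.argsort(norms)[-k:]          # indices of the k strongest channels
    mask = np.zeros(c, dtype=features.dtype)
    mask[keep] = 1.0
    # Broadcasting the (C,) mask over (C, H, W) zeroes the pruned channels.
    return features * mask[:, None, None], mask

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
pruned, mask = prune_channels(feats, keep_ratio=0.5)
print(int(mask.sum()))  # 4 channels survive
```

Transmitting `pruned` instead of `feats` reduces how much information the intermediate representation leaks, at the cost of some utility; the paper's contribution is to tune this trade-off adversarially rather than with a fixed criterion.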
In particular, the SSIM between the original data and the data reconstructed by the attacker decreased significantly, by 57%, under our approach. In summary, the obtained quantitative and qualitative results demonstrate the effectiveness of the proposed approach in balancing privacy and utility for typical split learning frameworks.
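The SSIM metric used above to quantify reconstruction quality can be sketched as a single-window (global) computation; production evaluations typically use a sliding-window implementation such as scikit-image's `structural_similarity`. The constants below follow the common defaults for images scaled to [0, 1], so this is an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global SSIM between two images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Luminance/contrast/structure terms combined in the standard form.
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
# Identical images give SSIM = 1; a degraded reconstruction scores lower,
# which is the direction a privacy defense wants to push the attacker.
noisy = np.clip(img + 0.3 * rng.standard_normal(img.shape), 0.0, 1.0)
print(ssim(img, img), ssim(img, noisy))
```

A lower SSIM between the original input and the attacker's reconstruction (the 57% drop reported above) indicates that the transmitted features support less faithful reconstruction.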
IEEE Access | COMPUTER SCIENCE, INFORMATION SYSTEMS | ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore
9.80
Self-citation rate
7.70%
Articles published
6673
Review turnaround
6 weeks
About the journal:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.