Title: Adversarial attack and defence of federated learning-based network traffic classification in edge computing environment
Authors: Azizi Ariffin, Faiz Zaki, Hazim Hanif, Nor Badrul Anuar
DOI: 10.1016/j.comnet.2025.111739
Journal: Computer Networks, vol. 272, Article 111739 (IF 4.6, JCR Q1, Computer Science, Hardware & Architecture)
Published: 2025-09-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1389128625007054
Citations: 0
Abstract
Network Traffic Classification (NTC) is vital for network management and security. However, as internet traffic volume increases, centralised model training creates scalability and privacy problems for NTC. Distributing NTC model training across multiple edge clients via Federated Learning (FL) addresses these problems by reducing latency, improving system scalability, and preserving data privacy. Nonetheless, the distributed nature of FL makes it vulnerable to adversarial attacks mounted by multiple clients, which degrade the model's performance. Most studies focus on a limited range of attacks, often overlooking more advanced and subtle threats such as backdoor attacks and those based on Generative Adversarial Networks (GANs). Despite the growing attack complexity, existing defensive measures in the NTC domain struggle to mitigate multiple adversarial attack types simultaneously. To validate this claim, this study investigates the vulnerability of FL-based NTC training to four types of adversarial attacks: label flipping (LF), model poisoning, and customised backdoor and GAN-based attacks, with the latter two tailored specifically to FL-based NTC training. Evaluated on the ISCX-VPN 2016 dataset, the results demonstrate that FL-based NTC is vulnerable to all four attack types. For instance, the LF attack reduced accuracy by 98.66% in a collusive scenario, the backdoor attack achieved a 40% success rate, and the GAN attack lowered the F1 score of the target class by 18%.
To strengthen resistance to such attacks, this study proposes a robust conceptual defence framework capable of defending against multiple adversarial attack types simultaneously. The framework incorporates remote attestation scoring, hierarchical training, and an adaptive aggregation mechanism, and a logic analysis is conducted to evaluate its effectiveness. The analysis shows that the framework maintains 76% model accuracy under multiple simultaneous adversarial attacks during training, compared with an 80% accuracy reduction when no defensive measures are applied.
Journal introduction:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers, and operators of networks, as well as designers and implementers. The Editorial Board will consider any material for publication that is of interest to those groups.