{"title":"An innovative multi-agent approach for robust cyber–physical systems using vertical federated learning","authors":"Shivani Gaba , Ishan Budhiraja , Vimal Kumar , Sahil Garg , Mohammad Mehedi Hassan","doi":"10.1016/j.adhoc.2024.103578","DOIUrl":null,"url":null,"abstract":"<div><p>Federated learning presents a compelling approach to training artificial intelligence systems in decentralized settings, prioritizing data safety over traditional centralized training methods. Understanding correlations among higher-level threats exhibiting abnormal behavior in the data stream becomes paramount to developing cyber–physical systems resilient to diverse attacks within a continuous data exchange framework. This work introduces a novel vertical federated multi-agent learning framework to address the challenges of modeling attacker and defender agents in stationary and non-stationary vertical federated learning environments. Our approach uniquely applies synchronous Deep Q-Network (DQN) based agents in stationary environments, facilitating convergence towards optimal strategies. Conversely, in non-stationary contexts, we employ synchronous Advantage Actor–Critic (A2C) based agents, adapting to the dynamic nature of multi-agent vertical federated reinforcement learning (VFRL) environments. This methodology enables us to simulate and analyze the adversarial interplay between attacker and defender agents, ensuring robust policy development. Our exhaustive analysis demonstrates the effectiveness of our approach, showcasing its capability to learn optimal policies in both static and dynamic setups, thus significantly advancing the field of cyber-security in federated learning contexts. To evaluate the effectiveness of our approach, we have done a comparative analysis with its baseline schemes. The findings of our study show significant enhancements compared to the standard methods, confirming the efficacy of our methodology. This progress dramatically enhances the area of cyber-security in the context of federated learning by facilitating the formulation of substantial policies. The proposed scheme attains 15.93%, 32.91%, 31.02%, and 47.26% higher results as compared to the A3C, DDQN, DQN, and Reinforce, respectively.</p></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":null,"pages":null},"PeriodicalIF":4.4000,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ad Hoc Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570870524001896","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Federated learning presents a compelling approach to training artificial intelligence systems in decentralized settings, prioritizing data privacy over traditional centralized training methods. Understanding correlations among higher-level threats exhibiting abnormal behavior in the data stream is paramount to developing cyber–physical systems that remain resilient to diverse attacks within a continuous data-exchange framework. This work introduces a novel vertical federated multi-agent learning framework that addresses the challenges of modeling attacker and defender agents in stationary and non-stationary vertical federated learning environments. Our approach applies synchronous Deep Q-Network (DQN) based agents in stationary environments, facilitating convergence towards optimal strategies. Conversely, in non-stationary contexts, we employ synchronous Advantage Actor–Critic (A2C) based agents, adapting to the dynamic nature of multi-agent vertical federated reinforcement learning (VFRL) environments. This methodology enables us to simulate and analyze the adversarial interplay between attacker and defender agents, ensuring robust policy development, and our analysis shows that optimal policies are learned in both static and dynamic setups. To evaluate the effectiveness of the approach, we conducted a comparative analysis against baseline schemes; the results show significant improvements over these standard methods, confirming the efficacy of our methodology and advancing cyber-security in federated learning contexts. The proposed scheme attains 15.93%, 32.91%, 31.02%, and 47.26% higher results than A3C, DDQN, DQN, and REINFORCE, respectively.
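To make the stationary-environment side of the abstract concrete, the sketch below shows a minimal synchronous DQN-style defender agent with a replay buffer and a periodically synchronized target network. It is an illustration only: the ToyIntrusionEnv, the 4-dimensional state, the allow/block action space, and all hyper-parameters are hypothetical and do not reproduce the authors' vertical federated setup.

```python
# Minimal DQN sketch for a single "defender" agent in a toy stationary environment.
# Everything here (environment, dimensions, rewards, hyper-parameters) is illustrative.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

class ToyIntrusionEnv:
    """Hypothetical stationary environment: the defender observes a 4-dim
    traffic-feature vector and chooses 0 = allow or 1 = block."""
    def reset(self):
        self.state = np.random.rand(4).astype(np.float32)
        return self.state

    def step(self, action):
        malicious = self.state.mean() > 0.5              # toy ground truth
        reward = 1.0 if (action == 1) == malicious else -1.0
        self.state = np.random.rand(4).astype(np.float32)
        return self.state, reward, False, {}

def make_qnet(n_obs=4, n_actions=2):
    return nn.Sequential(nn.Linear(n_obs, 32), nn.ReLU(), nn.Linear(32, n_actions))

env = ToyIntrusionEnv()
qnet, target = make_qnet(), make_qnet()
target.load_state_dict(qnet.state_dict())
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=5000), 0.99, 0.1

state = env.reset()
for step in range(2000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(2)
    else:
        with torch.no_grad():
            action = int(qnet(torch.tensor(state)).argmax())
    next_state, reward, done, _ = env.step(action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        q = qnet(s).gather(1, a).squeeze(1)              # Q(s, a)
        with torch.no_grad():
            q_target = r + gamma * target(s2).max(1).values
        loss = nn.functional.mse_loss(q, q_target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    if step % 200 == 0:                                  # periodic target sync
        target.load_state_dict(qnet.state_dict())
```

In the non-stationary case described in the abstract, the same loop would instead update an A2C-style actor and critic from on-policy rollouts rather than a replay buffer; the paper's attacker agent, vertical feature partitioning, and federated aggregation are not modeled here.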
Journal overview:
Ad Hoc Networks is an international and archival journal that provides a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking. The journal considers original, high-quality, and unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
Mobile and Wireless Ad Hoc Networks
Sensor Networks
Wireless Local and Personal Area Networks
Home Networks
Ad Hoc Networks of Autonomous Intelligent Systems
Novel Architectures for Ad Hoc and Sensor Networks
Self-organizing Network Architectures and Protocols
Transport Layer Protocols
Routing protocols (unicast, multicast, geocast, etc.)
Media Access Control Techniques
Error Control Schemes
Power-Aware, Low-Power and Energy-Efficient Designs
Synchronization and Scheduling Issues
Mobility Management
Mobility-Tolerant Communication Protocols
Location Tracking and Location-based Services
Resource and Information Management
Security and Fault-Tolerance Issues
Hardware and Software Platforms, Systems, and Testbeds
Experimental and Prototype Results
Quality-of-Service Issues
Cross-Layer Interactions
Scalability Issues
Performance Analysis and Simulation of Protocols.