{"title":"Non-cooperative Learning for Robust Spectrum Sharing in Connected Vehicles with Malicious Agents","authors":"Haoran Peng, Hanif Rahbari, S. Yang, Li-Chun Wang","doi":"10.1109/GLOBECOM48099.2022.10000791","DOIUrl":null,"url":null,"abstract":"Multi-agent reinforcement learning (MARL) has pre-viously been employed for efficient spectrum sharing among co-operative connected vehicles. However, we show in this paper that existing MARL models are not robust against non-cooperative or malicious agents (vehicles) whose spectrum selection strategy may cause congestion and reduce the spectrum utilization. For example, a selfish (non-cooperative) agent aims to only maximize its own spectrum utilization, irrespective of the overall system efficiency and spectrum availability to others. We investigate and analyze the MARL-based spectrum sharing problem in connected vehicles including vehicles (agents) with selfish or sabotage strategies. We then develop a theoretical framework to consider the selfish agent, and study various adversarial scenarios (including attacks with disruptive goals) via simulations. Our robust MARL approach where “robust” agents are trained to be prepared for selfish agents in testing phase achieves more resiliency in the presence of a selfish agent and even a sabotage one; achieving 6.7%~20% and 50.7% ~ 138% higher unicast throughput and broadcast delivery success rate over regular benign agents, respectively.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"692 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM48099.2022.10000791","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Multi-agent reinforcement learning (MARL) has previously been employed for efficient spectrum sharing among cooperative connected vehicles. However, we show in this paper that existing MARL models are not robust against non-cooperative or malicious agents (vehicles) whose spectrum selection strategy may cause congestion and reduce spectrum utilization. For example, a selfish (non-cooperative) agent aims only to maximize its own spectrum utilization, irrespective of the overall system efficiency and the spectrum availability to others. We investigate and analyze the MARL-based spectrum sharing problem in connected vehicles that include vehicles (agents) with selfish or sabotage strategies. We then develop a theoretical framework that accounts for the selfish agent, and study various adversarial scenarios (including attacks with disruptive goals) via simulations. Our robust MARL approach, in which "robust" agents are trained to anticipate selfish agents during the testing phase, achieves greater resiliency in the presence of a selfish agent and even a sabotaging one, attaining 6.7%~20% higher unicast throughput and 50.7%~138% higher broadcast delivery success rate than regular benign agents.
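
As a rough illustration of the selfish-agent problem described in the abstract (and not the paper's actual model, rewards, or parameters), the following Python sketch contrasts a cooperative, system-wide reward with a selfish, self-interested one in a toy spectrum-selection game. The channel counts, the simple collision model, and the example policies are all hypothetical.

```python
# Toy spectrum-selection game: three vehicles (agents) pick among three
# sub-channels; a transmission succeeds only if its sub-channel has a single
# user. This only illustrates cooperative vs. selfish reward structures.

def successes(actions):
    """Per-agent success indicator under the simple collision model."""
    return [1 if actions.count(ch) == 1 else 0 for ch in actions]

def cooperative_reward(actions):
    """Shared reward: fraction of agents that transmitted successfully."""
    s = successes(actions)
    return sum(s) / len(s)

def selfish_reward(actions, agent_id):
    """A selfish agent's reward: only its own success matters."""
    return successes(actions)[agent_id]

# A coordinated policy learned by cooperative MARL might map agent i -> channel i.
coordinated = [0, 1, 2]
print(cooperative_reward(coordinated))   # 1.0: full spectrum utilization

# Agent 1 deviates selfishly and grabs channel 0 (e.g., it perceives it as better).
deviated = [0, 0, 2]
print(cooperative_reward(deviated))      # ~0.33: collisions waste spectrum
print(selfish_reward(deviated, 1))       # 0: here the deviation even hurts the deviator
```

The sketch shows why a selfish spectrum selection strategy can congest sub-channels and pull down overall utilization, which is the failure mode the robust MARL agents in this paper are trained to withstand.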