Joint Energy and Correlation Based Anti-Intercepts for Ground Combat Vehicles
Van Hau Le, T. Nguyen, K. Nguyen, Satinderbir Singh
MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM), November 2022. DOI: 10.1109/MILCOM55135.2022.10017791
Today, ground combat vehicles (GCVs) in Warfighter Information Network-Tactical (WIN-T) systems are highly interconnected and autonomous. However, protecting a large number of wireless communication links against enemy interception in a dynamic environment is challenging. Because of GCV mobility, the Low Probability of Intercept (LPI) capability is easily violated, in particular when multiple interception techniques are used simultaneously. In this paper, we investigate the problem of preserving LPI capability using both traditional optimization and Deep Reinforcement Learning (DRL) approaches. Unlike prior work, we propose an anti-interception strategy that counters both energy-based and correlation-based interception techniques. Our strategy jointly optimizes the power allocation (PA) and spreading factor assignment (SA) of the WIN-T to evade these interceptors. The problem is mathematically formulated as a non-convex optimization model, which we solve using decomposition and difference-of-convex (DC) programming techniques. To obtain the optimized solution in near real time, we design a Multi-Agent Deep Reinforcement Learning (MADRL) strategy. Our numerical results show that the performance of the proposed MADRL strategy is close to that of the optimal solution, making it suitable for practical systems.
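The DC step mentioned in the abstract amounts to writing the non-convex objective as a difference of two convex functions and minimizing a sequence of convexified surrogates. As a rough illustration only (the paper's actual PA/SA model is not reproduced here), the sketch below applies the convex-concave procedure to a toy one-variable DC objective; the functions g, h and the budget p_max are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Toy convex-concave procedure (CCP) for a DC objective.  This is NOT the
# paper's model; it is a one-variable stand-in chosen so that each convex
# subproblem has a closed-form minimizer:
#
#   minimize  f(p) = g(p) - h(p),   0 <= p <= p_max
#   with      g(p) = p**4 (convex)  and  h(p) = 2*p**2 (convex),
# so f(p) = p**4 - 2*p**2 is non-convex, with constrained minimum at p = 1.

def ccp(p0: float, p_max: float = 5.0, iters: int = 30, tol: float = 1e-9) -> float:
    """Linearize h around the current iterate p_k and minimize the convex
    surrogate g(p) - h(p_k) - h'(p_k) * (p - p_k) over [0, p_max]."""
    p = p0
    for _ in range(iters):
        grad_h = 4.0 * p                      # h'(p_k) = 4 * p_k
        # Surrogate: p**4 - grad_h * p + const.  Setting its derivative
        # 4*p**3 - grad_h = 0 gives the closed-form minimizer below.
        p_new = float(np.clip((grad_h / 4.0) ** (1.0 / 3.0), 0.0, p_max))
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

if __name__ == "__main__":
    p_star = ccp(p0=3.0)
    print(f"CCP converged to p = {p_star:.4f}  (analytic optimum: 1.0)")
    print(f"objective f(p) = {p_star**4 - 2 * p_star**2:.4f}  (optimum: -1.0)")
```

Each surrogate is convex, so its minimizer is computed exactly, and the iterates decrease f monotonically; this is the basic mechanism a DC-based solver exploits, here shown under stated toy assumptions rather than the paper's joint PA/SA objective.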