Adversarial Attacks Against Reinforcement Learning Based Tactical Networks: A Case Study
J. Loevenich, Jonas Bode, Tobias Hürten, Luca Liberto, Florian Spelter, Paulo H. L. Rettore, R. Lopes
MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM), 28 November 2022. DOI: 10.1109/MILCOM55135.2022.10017788
Dynamic changes caused by conditions such as challenging terrain or hostile encounters force tactical networks to be highly adaptable. To address this problem, recent proposals implement Reinforcement Learning (RL) based solutions for routing in such complex environments. Because high security is another crucial requirement for tactical networks, we examine the vulnerability of one such solution to the novel attack vector of adversarial attacks that specifically target RL algorithms. Using a suite of attack methods, we find the targeted solution to be vulnerable to multiple attacks, and we find that the most effective attacks exploit detailed knowledge about the victim agent. Finally, we outline the need for further research into more complex attack strategies to expose the vulnerabilities of other RL proposals for tactical networks. This investigation may also motivate the design and implementation of defensive measures that increase the robustness of vulnerable systems.
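The abstract does not spell out the attack mechanics, but the class of attacks it refers to, adversarial perturbations of an RL agent's observations, can be illustrated with a minimal sketch. The following FGSM-style example is purely illustrative: the Policy network, state dimension, and epsilon value are hypothetical placeholders and are not taken from the paper.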
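```python
# Illustrative sketch only: a generic FGSM-style perturbation of an RL agent's
# observation. Network architecture, state size, and epsilon are hypothetical
# and NOT taken from the paper under discussion.
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Toy policy: maps a state vector to action logits."""

    def __init__(self, state_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def fgsm_observation_attack(policy: Policy, state: torch.Tensor,
                            epsilon: float = 0.05) -> torch.Tensor:
    """Perturb the observation to push the agent away from its clean action."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)
    clean_action = logits.argmax(dim=-1)  # action the unperturbed agent would pick
    loss = nn.functional.cross_entropy(logits, clean_action)
    loss.backward()
    # Step *up* the loss gradient so the original action becomes less likely.
    return (state + epsilon * state.grad.sign()).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = Policy()
    clean_state = torch.randn(1, 8)
    adv_state = fgsm_observation_attack(policy, clean_state)
    print("clean action:", policy(clean_state).argmax(dim=-1).item())
    print("adversarial action:", policy(adv_state).argmax(dim=-1).item())
```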