Selective real-time adversarial perturbations against deep reinforcement learning agents
Hongjin Yao, Yisheng Li, Yunpeng Sun, Zhichao Lian
IET Cyber-Physical Systems: Theory and Applications, vol. 9, no. 1, pp. 41-49. First published 22 September 2023. DOI: 10.1049/cps2.12065 (https://onlinelibrary.wiley.com/doi/10.1049/cps2.12065)
Abstract
Recent work has shown that deep reinforcement learning (DRL) is vulnerable to adversarial attacks, so exploiting vulnerabilities in DRL systems through adversarial attack techniques has become a necessary prerequisite for building robust DRL systems. Compared to traditional deep learning systems, DRL systems are characterised by long sequences of decisions rather than a single one-step decision, so attackers must perform multi-step attacks on them. To attack a DRL system successfully, the number of attacked time steps must be minimised, both to avoid detection by the victim agent and to preserve the effectiveness of the attack. Some selective attack methods proposed in recent research, that is, methods that attack an agent at only a subset of time steps, can avoid detection by the victim agent but are not applicable to real-time attack scenarios. A real-time selective attack method applicable to environments with discrete action spaces is proposed. First, the optimal attack threshold T for performing selective attacks in the environment Env is determined. Then, the observation states at which the value of the victim agent's action preference function exceeds T across multiple episodes are added to a training set according to this threshold. Finally, a universal perturbation is generated from this training set and used to perform real-time selective attacks on the victim agent. Comparative experiments show that the proposed attack method can operate in real time while maintaining attack effectiveness and stealthiness.
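As a rough illustration of the pipeline the abstract describes, the sketch below assumes the action preference function is the gap between the most- and least-preferred action probabilities (a common choice in strategically timed attacks; the paper's exact definition may differ), a Gymnasium-style environment API, and a universal perturbation precomputed offline on the collected states. The names policy_probs, universal_perturbation, and eps are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def action_preference(probs: np.ndarray) -> float:
    """Gap between the most- and least-preferred actions; a large gap
    means the agent strongly favours one action, so perturbing this
    step is more likely to change its behaviour. (Assumed definition,
    not necessarily the paper's exact preference function.)"""
    return float(probs.max() - probs.min())

def collect_training_states(env, policy_probs, T, n_episodes=20):
    """Roll out clean episodes and keep the observations whose
    preference value exceeds the threshold T; a universal perturbation
    is then generated offline from this set (e.g. with a UAP-style
    method, not shown here)."""
    states = []
    for _ in range(n_episodes):
        obs, _ = env.reset()  # Gymnasium-style API (assumed)
        done = False
        while not done:
            probs = policy_probs(obs)
            if action_preference(probs) > T:
                states.append(np.array(obs, copy=True))
            obs, _, terminated, truncated, _ = env.step(int(np.argmax(probs)))
            done = terminated or truncated
    return np.stack(states)

def run_selective_attack(env, policy_probs, universal_perturbation, T, eps=0.01):
    """At test time, add the precomputed universal perturbation only
    when the preference exceeds T; all other steps are left clean,
    which is what keeps the attack stealthy and real-time (no
    per-step perturbation optimisation)."""
    obs, _ = env.reset()
    done, total_reward = False, 0.0
    while not done:
        attacked_obs = obs
        if action_preference(policy_probs(obs)) > T:
            # Assumes observations normalised to [0, 1], e.g. pixels.
            attacked_obs = np.clip(obs + eps * universal_perturbation, 0.0, 1.0)
        action = int(np.argmax(policy_probs(attacked_obs)))
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        total_reward += float(reward)
    return total_reward
```

Because the perturbation is fixed in advance, the only per-step cost at attack time is one extra forward pass to evaluate the preference, which is what makes the selective attack feasible in real time.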