Jiamin Shen, Li Xu, Xu Wan, Jixuan Chai, Chunlong Fan
Title: Research on Constant Perturbation Strategy for Deep Reinforcement Learning
DOI: 10.1145/3590003.3590101
Published in: Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning
Publication date: 2023-03-17
Citations: 0
Abstract
The development of attack algorithms for deep reinforcement learning is an important part of its security research. In this paper, we propose a constant perturbation strategy for deep reinforcement learning with long-range time-series dependence, approached from the perspective of the sequence of interactions between an agent and its environment. Based on a small amount of historical interaction information, a constant perturbation is designed to disrupt the long-range temporal associations of the deep reinforcement learning algorithm, using sensitive-region selection to achieve the attack effect. The experimental results show that the time-series-based constant perturbation is effective, inducing agents to make frequent wrong decisions and receive minimal reward. The algorithm also retains an attacking effect against defensively trained agents, and it effectively reduces the number of adversarial-perturbation computations.
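The abstract's core idea can be illustrated with a minimal sketch: compute one fixed ("constant") perturbation once, then reuse it only at timesteps selected as sensitive from a short interaction history. This is not the paper's implementation; the sensitivity heuristic (local variation of value estimates), the perturbation form, and all names and thresholds below are assumptions for illustration only.

```python
import numpy as np

def select_sensitive_steps(state_values, top_k=3):
    """Pick the timesteps whose (hypothetical) value estimates change most
    sharply, treating high local variation as a proxy for sensitivity."""
    diffs = np.abs(np.diff(state_values, prepend=state_values[0]))
    return set(np.argsort(diffs)[-top_k:].tolist())

def constant_perturbation_attack(observations, state_values, eps=0.1, top_k=3):
    """Compute one fixed perturbation vector and add it to the observations
    only at the selected sensitive timesteps, avoiding per-step recomputation."""
    sensitive = select_sensitive_steps(state_values, top_k)
    rng = np.random.default_rng(0)
    # The constant perturbation: computed once, reused at every sensitive step.
    delta = eps * np.sign(rng.standard_normal(observations.shape[1]))
    perturbed = observations.copy()
    for t in sensitive:
        perturbed[t] += delta
    return perturbed, sensitive
```

Because the perturbation is computed a single time and reused, the attack's per-episode cost is dominated by the cheap sensitivity selection, which mirrors the abstract's claim of reducing the number of adversarial-perturbation computations.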