{"title":"忽略噪音:在强化学习中使用自编码器对抗对抗性攻击(闪电演讲)","authors":"William Aiken, Hyoungshick Kim","doi":"10.1109/ICSSA45270.2018.00028","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) algorithms learn and explore nearly any state any number of times in their environment, but minute adversarial attacks cripple these agents. In this work, we define our threat model against RL agents as such: Adversarial agents introduce small permutations to the input data via black-box models with the goal of reducing the optimality of the agent. We focus on pre-processing adversarial images before they enter the network to reconstruct the ground-truth images.","PeriodicalId":223442,"journal":{"name":"2018 International Conference on Software Security and Assurance (ICSSA)","volume":"421 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ignore the Noise: Using Autoencoders against Adversarial Attacks in Reinforcement Learning (Lightning Talk)\",\"authors\":\"William Aiken, Hyoungshick Kim\",\"doi\":\"10.1109/ICSSA45270.2018.00028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning (RL) algorithms learn and explore nearly any state any number of times in their environment, but minute adversarial attacks cripple these agents. In this work, we define our threat model against RL agents as such: Adversarial agents introduce small permutations to the input data via black-box models with the goal of reducing the optimality of the agent. We focus on pre-processing adversarial images before they enter the network to reconstruct the ground-truth images.\",\"PeriodicalId\":223442,\"journal\":{\"name\":\"2018 International Conference on Software Security and Assurance (ICSSA)\",\"volume\":\"421 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 International Conference on Software Security and Assurance (ICSSA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSSA45270.2018.00028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 International Conference on Software Security and Assurance (ICSSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSSA45270.2018.00028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Ignore the Noise: Using Autoencoders against Adversarial Attacks in Reinforcement Learning (Lightning Talk)
Reinforcement learning (RL) algorithms learn by exploring nearly any state in their environment any number of times, but even minute adversarial attacks can cripple these agents. In this work, we define our threat model against RL agents as follows: adversarial agents introduce small perturbations into the input data via black-box models, with the goal of reducing the optimality of the agent. We focus on pre-processing adversarial images with an autoencoder before they enter the network, in order to reconstruct the ground-truth images.
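The abstract describes the defense only at a high level. Below is a minimal sketch of the idea, not the authors' implementation: a denoising autoencoder, trained to map perturbed frames back to clean ones, sits between the environment and the policy network. The architecture, the Atari-style 84x84 grayscale input, and the `policy` stub are all illustrative assumptions.

```python
# Sketch of an autoencoder pre-processing defense (assumed details, not the
# paper's exact architecture): reconstruct the clean frame from a perturbed
# one before the RL agent sees it.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Maps a (possibly adversarially perturbed) frame to a clean reconstruction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 84 -> 42
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 42 -> 21
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 21 -> 42
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 42 -> 84
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(ae, optimizer, noisy_batch, clean_batch):
    """One reconstruction-loss update: push ae(noisy) toward the clean frame."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(ae(noisy_batch), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def act(ae, policy, obs):
    """At inference the autoencoder filters each observation before the agent
    acts on it. `policy` is a hypothetical stand-in for the trained RL network."""
    cleaned = ae(obs)       # strip the adversarial perturbation
    return policy(cleaned)  # agent decides from the reconstructed frame
```

The design point is that the defense is attack-agnostic: the policy network is never retrained, and only the pre-processor needs to learn the mapping from perturbed observations back to ground truth.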