Sequence Squeezing: A Defense Method Against Adversarial Examples for API Call-Based RNN Variants
Ishai Rosenberg, A. Shabtai, Y. Elovici, L. Rokach
2021 International Joint Conference on Neural Networks (IJCNN), July 2021. DOI: 10.1109/IJCNN52387.2021.9534432
Adversarial examples are known to mislead deep learning models so that the models will classify them incorrectly, even in domains where such models have achieved state-of-the-art performance. Until recently, research on both adversarial attack and defense methods focused on computer vision, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, referred to as sequence squeezing, aimed at making RNN variant (e.g., LSTM) classifiers more robust against such attacks. Our method differs from existing defense methods, which were designed only for non-sequence based models. We also implement three additional defense methods inspired by recently published CNN defense methods as baselines for our method. Using sequence squeezing, we were able to decrease the effectiveness of such adversarial attacks from 99.9% to 15%, outperforming all of the baseline defense methods.
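The abstract does not spell out the squeezing transforms themselves, so the following is only a minimal illustrative sketch of the general idea, assuming sequence squeezing works analogously to feature squeezing for images: apply a semantics-preserving "squeeze" to the API call sequence (here, two hypothetical transforms: mapping calls to coarser groups and merging consecutive duplicates) and flag an input as adversarial when the classifier's score changes sharply after squeezing. The function names, transforms, and threshold are assumptions for illustration, not the paper's implementation.

```python
def squeeze_api_sequence(seq, synonym_map=None):
    """Return a 'squeezed' version of an API call sequence.

    Two illustrative squeezing operations (assumptions, not necessarily the
    paper's exact transforms):
      1. map each call to a coarser group via `synonym_map`,
      2. merge runs of consecutive identical calls into a single call.
    """
    synonym_map = synonym_map or {}
    mapped = [synonym_map.get(call, call) for call in seq]
    return [c for i, c in enumerate(mapped) if i == 0 or c != mapped[i - 1]]


def is_adversarial(classify, seq, synonym_map=None, threshold=0.5):
    """Detection by disagreement: flag `seq` if the classifier's malware
    score moves by more than `threshold` once the sequence is squeezed."""
    score_original = classify(seq)
    score_squeezed = classify(squeeze_api_sequence(seq, synonym_map))
    return abs(score_original - score_squeezed) > threshold


if __name__ == "__main__":
    # Toy stand-in scoring function; a real setup would query an LSTM/GRU
    # classifier over an embedded API call trace.
    toy_classify = lambda s: min(1.0, s.count("VirtualAlloc") / 5.0)

    benign_like = ["CreateFile", "ReadFile", "CloseHandle"]
    padded_attack = ["VirtualAlloc"] * 4 + ["ReadFile"] + ["VirtualAlloc"] * 4

    print(is_adversarial(toy_classify, benign_like))    # False: score unchanged
    print(is_adversarial(toy_classify, padded_attack))  # True: score drops after merging duplicates
```

The design intuition is the same as in CNN feature squeezing: adversarial perturbations (e.g., padded or redundant API calls) tend to be removed by the squeeze, so the prediction gap between the original and squeezed sequence serves as a detection signal.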