{"title":"Adversarial Attacks on Deep Neural Network based Modulation Recognition","authors":"Mingqian Liu, Zhenju Zhang","doi":"10.1109/DSA56465.2022.00159","DOIUrl":null,"url":null,"abstract":"Modulation recognition technology based on deep learning (DL) has great advantages in feature extraction and recognition. However, due to the vulnerability of deep neural network (DNN), the automatic modulation recognition model based on DNN is vulnerable to attacks. Some researchers have successfully attacked automatic modulation recognition model-s using adversarial techniques, but the resulting adversarial samples have poor attack performance on high-performance recognition models. Therefore, this paper proposes an attack method based on double loop iteration, which can update the initial conditions of each iteration with the change of the number of iterations when generating adversarial examples. Simulation results show that the proposed attack method has better attack performance than the traditional attack methods.","PeriodicalId":208148,"journal":{"name":"2022 9th International Conference on Dependable Systems and Their Applications (DSA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Dependable Systems and Their Applications (DSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSA56465.2022.00159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Modulation recognition technology based on deep learning (DL) has great advantages in feature extraction and recognition. However, because deep neural networks (DNNs) are inherently fragile, automatic modulation recognition models built on them are vulnerable to attacks. Some researchers have successfully attacked automatic modulation recognition models using adversarial techniques, but the resulting adversarial examples perform poorly against high-performance recognition models. Therefore, this paper proposes an attack method based on double-loop iteration, which updates the initial conditions of each iteration as the iteration count changes while generating adversarial examples. Simulation results show that the proposed method achieves better attack performance than traditional attack methods.
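
To make the double-loop idea concrete, below is a minimal PyTorch sketch of one plausible reading of such an attack: an inner loop performs PGD-style gradient-sign steps, while an outer loop re-seeds the inner loop's starting point from the previous round's result, so the initial conditions change with the accumulated iteration count. The abstract does not give the paper's exact update rule, so the function name double_loop_attack, the PGD-style inner step, and all parameter values here are illustrative assumptions, not the authors' algorithm.

import torch
import torch.nn as nn

def double_loop_attack(model, x, y, eps=0.05, alpha=0.01,
                       outer_iters=5, inner_iters=10):
    # Hypothetical double-loop iterative attack (assumed PGD-style sketch).
    # The outer loop updates the initial condition for the next inner loop
    # from the current round's adversarial example; the inner loop runs
    # standard gradient-sign steps within an eps-ball around the clean input.
    loss_fn = nn.CrossEntropyLoss()
    x_clean = x.clone().detach()
    x_init = x_clean.clone()              # initial condition for round 0

    for _ in range(outer_iters):
        x_adv = x_init.clone().detach().requires_grad_(True)
        for _ in range(inner_iters):
            loss = loss_fn(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                # project back into the eps-ball around the clean signal
                x_adv = x_clean + torch.clamp(x_adv - x_clean, -eps, eps)
            x_adv.requires_grad_(True)
        # update the next round's initial condition from this round's result
        x_init = x_adv.detach()

    return x_adv.detach()

Under these assumptions, the attack would be invoked with a trained modulation classifier and a batch of I/Q samples, e.g. x_adv = double_loop_attack(model, signals, labels); a single-loop attack such as plain PGD corresponds to outer_iters=1, which is the baseline the paper reportedly improves on.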