{"title":"一种针对对抗性攻击的自适应随机安全方法","authors":"Lovi Dhamija, Urvashi Garg","doi":"10.1080/19393555.2022.2088429","DOIUrl":null,"url":null,"abstract":"ABSTRACT With the rising trends and use of machine learning algorithms for classification and regression tasks, deep learning has been widely accepted in the Cyber and as well as non-Cyber Domain. Recent researches have shown that machine learning classifiers such as Deep Neural Networks (DNN) can be used to improve the detection against adversarial samples as well as to detect malware in the cyber security domain. However, a recent study in deep learning has found that DNN classifiers are highly vulnerable and can be evaded simply by either performing small modifications in the training model or training data. The work proposed a randomized defensive mechanism with the use of generative adversarial networks to construct more adversaries and then defend against them. Interestingly, we encountered some open challenges highlighting common difficulties faced by defensive mechanisms. We provide a general overview of adversarial attacks and proposed an Adaptive Randomized Algorithm to enhance the robustness of models. Moreover, this work aimed to ensure the security and transferability of deep learning classifiers.","PeriodicalId":103842,"journal":{"name":"Information Security Journal: A Global Perspective","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An adaptive randomized and secured approach against adversarial attacks\",\"authors\":\"Lovi Dhamija, Urvashi Garg\",\"doi\":\"10.1080/19393555.2022.2088429\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT With the rising trends and use of machine learning algorithms for classification and regression tasks, deep learning has been widely accepted in the Cyber and as well as non-Cyber Domain. Recent researches have shown that machine learning classifiers such as Deep Neural Networks (DNN) can be used to improve the detection against adversarial samples as well as to detect malware in the cyber security domain. However, a recent study in deep learning has found that DNN classifiers are highly vulnerable and can be evaded simply by either performing small modifications in the training model or training data. The work proposed a randomized defensive mechanism with the use of generative adversarial networks to construct more adversaries and then defend against them. Interestingly, we encountered some open challenges highlighting common difficulties faced by defensive mechanisms. We provide a general overview of adversarial attacks and proposed an Adaptive Randomized Algorithm to enhance the robustness of models. 
Moreover, this work aimed to ensure the security and transferability of deep learning classifiers.\",\"PeriodicalId\":103842,\"journal\":{\"name\":\"Information Security Journal: A Global Perspective\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Security Journal: A Global Perspective\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/19393555.2022.2088429\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Security Journal: A Global Perspective","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/19393555.2022.2088429","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An adaptive randomized and secured approach against adversarial attacks
ABSTRACT With the rising adoption of machine learning algorithms for classification and regression tasks, deep learning has been widely accepted in both the cyber and non-cyber domains. Recent research has shown that machine learning classifiers such as deep neural networks (DNNs) can be used to improve the detection of adversarial samples as well as to detect malware in the cyber security domain. However, recent studies in deep learning have found that DNN classifiers are highly vulnerable and can be evaded simply by making small modifications to either the training model or the training data. This work proposes a randomized defensive mechanism that uses generative adversarial networks to construct additional adversaries and then defend against them. In doing so, we encountered several open challenges that highlight common difficulties faced by defensive mechanisms. We provide a general overview of adversarial attacks and propose an adaptive randomized algorithm to enhance the robustness of models. Moreover, this work aims to ensure the security and transferability of deep learning classifiers.
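The abstract does not spell out the attack or the defence in detail, so the following is only a minimal illustrative sketch, not the authors' actual algorithm: it assumes an FGSM-style perturbation as the evasion attack and a training step that mixes clean and adversarially perturbed samples with a randomly drawn perturbation budget as a stand-in for the randomized defence. The model (TinyNet), the epsilon range, and the synthetic data are hypothetical placeholders.

```python
# Illustrative sketch only: FGSM-style evasion plus a randomized
# adversarial-training step. None of the names below come from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Toy classifier standing in for the DNN under attack."""
    def __init__(self, in_dim=20, n_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 32)
        self.fc2 = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

def fgsm_perturb(model, x, y, eps=0.1):
    """Craft an adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def randomized_adversarial_step(model, optimizer, x, y, eps_range=(0.05, 0.2)):
    """One training step on clean plus adversarial samples, drawing the
    perturbation budget at random (the 'randomized' idea)."""
    eps = torch.empty(1).uniform_(*eps_range).item()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(64, 20)          # synthetic feature vectors
    y = torch.randint(0, 2, (64,))   # synthetic labels
    for step in range(5):
        print(f"step {step}: loss={randomized_adversarial_step(model, opt, x, y):.4f}")
```

Drawing the perturbation budget at random on each step is one simple way to make the defence harder for an attacker to anticipate; the paper's adaptive algorithm and its GAN-based adversary generation are not reproduced here.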