Analysis of the Effect of Adversarial Training in Defending EfficientNet-B0 Model from DeepFool Attack
Ashwin Muthuraman A., Balaaditya M., Snofy D. Dunston, M. V
2023 3rd International Conference on Intelligent Communication and Computational Techniques (ICCT), 2023-01-19
DOI: 10.1109/ICCT56969.2023.10075774
Abstract: Manual medical image diagnosis is time-consuming, and its predictions are prone to human error. Deep learning models have enabled efficient and reliable automated systems for medical image analysis. However, these models are highly vulnerable to adversarial attacks: techniques that feed them deceptive inputs, causing them to misclassify images and lose their reliability. DeepFool is one such attack, which efficiently computes small perturbations that fool deep networks. Using two different datasets, we study the impact of the DeepFool attack on an EfficientNet-B0 model. Several defense mechanisms exist to protect a model against such attacks; adversarial training is one of them, hardening the model against a particular attack by training it on that attack's adversarial examples. We also analyse how effectively adversarial training defends the model and makes it robust.
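The abstract names two concrete techniques, the DeepFool attack and attack-specific adversarial training, so a short sketch may help make the pipeline concrete. This is a minimal illustration, not the authors' code: the paper publishes no implementation, so the choice of torchvision's efficientnet_b0, the torchattacks DeepFool implementation, the two-class setup, the dummy tensors, and all hyperparameters below are assumptions.

```python
# A minimal sketch, not the authors' code: the paper includes no
# implementation, so the libraries (torchvision, torchattacks), the
# two-class setup, the dummy data, and all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchattacks
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import efficientnet_b0

device = "cuda" if torch.cuda.is_available() else "cpu"

# EfficientNet-B0 with a hypothetical two-class medical-image head.
model = efficientnet_b0(num_classes=2).to(device)

# DeepFool iteratively steps toward the nearest decision boundary;
# steps/overshoot are the torchattacks defaults, not values from the paper.
attack = torchattacks.DeepFool(model, steps=50, overshoot=0.02)

# Dummy stand-in batch so the sketch runs end to end; replace with real loaders.
data = TensorDataset(torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,)))
loader = DataLoader(data, batch_size=4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def adversarial_training_epoch():
    """One epoch of attack-specific adversarial training: craft DeepFool
    examples against the current weights and train on them with the clean batch."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = attack(images, labels)  # adversarial counterparts
        optimizer.zero_grad()
        # One common recipe: joint loss on clean and adversarial inputs.
        loss = criterion(model(images), labels) + criterion(model(adv_images), labels)
        loss.backward()
        optimizer.step()

adversarial_training_epoch()
```

For context on the design: DeepFool linearizes the classifier around the current input and repeatedly takes the smallest step that crosses the nearest decision boundary, which is why it tends to find very small perturbations; adversarial training then folds those perturbed images back into the training set so the learned decision boundary moves away from them.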