{"title":"Interpretation of Adversarial Attack on Unsupervised Domain Adaptation","authors":"Mst. Tasnim Pervin, A. Huq","doi":"10.1109/ETCCE54784.2021.9689917","DOIUrl":null,"url":null,"abstract":"Recent advances in deep neural networks has accelerated the process of automation in several fields like image processing, object detection, segmentation tasks and many more. Though, it has been also proved that these deep neural networks or deep CNNs need large scale dataset to be trained on to produce desired output. Supplying huge dataset often becomes difficult for many fields. Domain adaptation is supposed to be a possible way to cope with this problem of large data requirement as it allows model to gain experience from one large source dataset during training and exploit that experience during working with any smaller, related but technically different dataset. But the threat remains when the concept of adversarial machine learning strikes. Like many other deep learning models, adaptive models seem to be vulnerable to adversarial attacks. We target to analysis how these attack techniques from adversarial machine learning affect unsupervised adaptive models’ performance for two related but structurally different dataset like MNIST and MNISTM. We used three different attack techniques called FGSM, MIFGSM and PGD for the experiment. Experiments show the deadly effect of these attack techniques on both of the baseline and adaptive models where adaptive model seem to be more vulnerable than the baseline non-adaptive model.","PeriodicalId":208038,"journal":{"name":"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETCCE54784.2021.9689917","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent advances in deep neural networks have accelerated automation in fields such as image processing, object detection, and segmentation. However, it has also been shown that these deep neural networks, or deep CNNs, need large-scale datasets to be trained on in order to produce the desired output. Supplying such huge datasets is often difficult in many fields. Domain adaptation is a possible way to cope with this large data requirement, as it allows a model to gain experience from one large source dataset during training and to exploit that experience when working with a smaller, related but technically different dataset. But a threat remains when the concept of adversarial machine learning strikes. Like many other deep learning models, adaptive models appear to be vulnerable to adversarial attacks. We aim to analyze how attack techniques from adversarial machine learning affect the performance of unsupervised adaptive models on two related but structurally different datasets, MNIST and MNIST-M. We used three attack techniques, FGSM, MI-FGSM, and PGD, for the experiments. The experiments show the severe effect of these attacks on both the baseline and adaptive models, where the adaptive model appears to be more vulnerable than the baseline non-adaptive model.
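
For reference, the three attacks named in the abstract all perturb an input along the gradient of the loss: FGSM takes a single signed-gradient step, PGD iterates that step with projection, and MI-FGSM adds momentum to the gradient. Below is a minimal sketch of FGSM on a generic PyTorch classifier; it is not the authors' implementation, and the model, epsilon, and pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: x_adv = x + epsilon * sign(dL/dx).

    Assumes inputs are normalized to [0, 1]; epsilon controls the
    perturbation budget. PGD would repeat this step with a smaller
    step size and project back into the epsilon-ball after each step.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the adversarial example in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

In an evaluation like the one described above, such adversarial examples would be fed to both the non-adaptive baseline and the domain-adapted model, and the drop in accuracy compared.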