Interpretation of Adversarial Attack on Unsupervised Domain Adaptation

Mst. Tasnim Pervin, A. Huq
{"title":"无监督域自适应对抗性攻击的解释","authors":"Mst. Tasnim Pervin, A. Huq","doi":"10.1109/ETCCE54784.2021.9689917","DOIUrl":null,"url":null,"abstract":"Recent advances in deep neural networks has accelerated the process of automation in several fields like image processing, object detection, segmentation tasks and many more. Though, it has been also proved that these deep neural networks or deep CNNs need large scale dataset to be trained on to produce desired output. Supplying huge dataset often becomes difficult for many fields. Domain adaptation is supposed to be a possible way to cope with this problem of large data requirement as it allows model to gain experience from one large source dataset during training and exploit that experience during working with any smaller, related but technically different dataset. But the threat remains when the concept of adversarial machine learning strikes. Like many other deep learning models, adaptive models seem to be vulnerable to adversarial attacks. We target to analysis how these attack techniques from adversarial machine learning affect unsupervised adaptive models’ performance for two related but structurally different dataset like MNIST and MNISTM. We used three different attack techniques called FGSM, MIFGSM and PGD for the experiment. Experiments show the deadly effect of these attack techniques on both of the baseline and adaptive models where adaptive model seem to be more vulnerable than the baseline non-adaptive model.","PeriodicalId":208038,"journal":{"name":"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interpretation of Adversarial Attack on Unsupervised Domain Adaptation\",\"authors\":\"Mst. Tasnim Pervin, A. Huq\",\"doi\":\"10.1109/ETCCE54784.2021.9689917\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advances in deep neural networks has accelerated the process of automation in several fields like image processing, object detection, segmentation tasks and many more. Though, it has been also proved that these deep neural networks or deep CNNs need large scale dataset to be trained on to produce desired output. Supplying huge dataset often becomes difficult for many fields. Domain adaptation is supposed to be a possible way to cope with this problem of large data requirement as it allows model to gain experience from one large source dataset during training and exploit that experience during working with any smaller, related but technically different dataset. But the threat remains when the concept of adversarial machine learning strikes. Like many other deep learning models, adaptive models seem to be vulnerable to adversarial attacks. We target to analysis how these attack techniques from adversarial machine learning affect unsupervised adaptive models’ performance for two related but structurally different dataset like MNIST and MNISTM. We used three different attack techniques called FGSM, MIFGSM and PGD for the experiment. 
Experiments show the deadly effect of these attack techniques on both of the baseline and adaptive models where adaptive model seem to be more vulnerable than the baseline non-adaptive model.\",\"PeriodicalId\":208038,\"journal\":{\"name\":\"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)\",\"volume\":\"104 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ETCCE54784.2021.9689917\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Emerging Technology in Computing, Communication and Electronics (ETCCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETCCE54784.2021.9689917","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citation count: 0

Abstract

Recent advances in deep neural networks have accelerated automation in fields such as image processing, object detection, and segmentation. However, these deep neural networks, or deep CNNs, need large-scale datasets to train on before they produce the desired output, and supplying such datasets is difficult in many fields. Domain adaptation is a possible way to cope with this large-data requirement: it allows a model to gain experience from one large source dataset during training and to exploit that experience when working with a smaller, related but technically different dataset. The threat remains, however, once adversarial machine learning enters the picture. Like many other deep learning models, adaptive models appear vulnerable to adversarial attacks. We analyze how attack techniques from adversarial machine learning affect the performance of unsupervised adaptive models on two related but structurally different datasets, MNIST and MNISTM. We use three attack techniques, FGSM, MIFGSM, and PGD, in our experiments. The experiments show the severe effect of these attacks on both the baseline and the adaptive models, where the adaptive model appears to be more vulnerable than the baseline non-adaptive model.
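The abstract names three attacks but does not define them; for reference, the standard formulations from the adversarial-example literature are as follows. FGSM takes a single signed-gradient step, x_adv = x + ε · sign(∇x J(θ, x, y)); PGD repeats smaller signed-gradient steps and projects each iterate back into the ε-ball around the clean input; MIFGSM (momentum iterative FGSM) accumulates a momentum term over normalized gradients before taking each signed step.

Below is a minimal PyTorch sketch of these three attacks against an arbitrary image classifier. It illustrates the attack definitions only and is not the paper's implementation: the function names and the values of epsilon, alpha, steps, and decay are placeholders, and the settings used for MNIST and MNISTM in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.1) -> torch.Tensor:
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Single signed-gradient step, then clip to the valid pixel range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()


def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 0.1, alpha: float = 0.01,
               steps: int = 10) -> torch.Tensor:
    """PGD: iterative signed-gradient steps, projected onto the L-inf epsilon-ball."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the epsilon-ball around the clean input, then the pixel range.
        x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()


def mifgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                  epsilon: float = 0.1, decay: float = 1.0,
                  steps: int = 10) -> torch.Tensor:
    """MIFGSM: accumulate momentum over normalized gradients (assumes NCHW image batches)."""
    alpha = epsilon / steps
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x_adv)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        momentum = decay * momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * momentum.sign()).clamp(0.0, 1.0)
    return x_adv.detach()


# Hypothetical usage: x_adv = pgd_attack(classifier, images, labels)
```

In the setup the abstract describes, such attacks would be applied to the inputs of both the baseline (non-adaptive) classifier and the domain-adaptive classifier, and the resulting drop in accuracy compared across the two models.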