Adversarial machine learning for cybersecurity and computer vision: Current developments and challenges

IF 4.4 · CAS Tier 2 (Mathematics) · JCR Q1, STATISTICS & PROBABILITY
B. Xi
{"title":"网络安全和计算机视觉的对抗性机器学习:当前的发展和挑战","authors":"B. Xi","doi":"10.1002/wics.1511","DOIUrl":null,"url":null,"abstract":"We provide a comprehensive overview of adversarial machine learning focusing on two application domains, that is, cybersecurity and computer vision. Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques—they are vulnerable to carefully crafted attacks from malicious adversaries. For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks against machine learning techniques—poisoning attacks, evasion attacks, and privacy attacks. Then the corresponding defense approaches are introduced along with the weakness and limitations of the existing defense approaches. We notice adversarial samples in cybersecurity and computer vision are fundamentally different. While adversarial samples in cybersecurity often have different properties/distributions compared with training data, adversarial images in computer vision are created with minor input perturbations. This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.","PeriodicalId":47779,"journal":{"name":"Wiley Interdisciplinary Reviews-Computational Statistics","volume":null,"pages":null},"PeriodicalIF":4.4000,"publicationDate":"2020-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/wics.1511","citationCount":"15","resultStr":"{\"title\":\"Adversarial machine learning for cybersecurity and computer vision: Current developments and challenges\",\"authors\":\"B. Xi\",\"doi\":\"10.1002/wics.1511\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We provide a comprehensive overview of adversarial machine learning focusing on two application domains, that is, cybersecurity and computer vision. Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques—they are vulnerable to carefully crafted attacks from malicious adversaries. For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks against machine learning techniques—poisoning attacks, evasion attacks, and privacy attacks. Then the corresponding defense approaches are introduced along with the weakness and limitations of the existing defense approaches. We notice adversarial samples in cybersecurity and computer vision are fundamentally different. While adversarial samples in cybersecurity often have different properties/distributions compared with training data, adversarial images in computer vision are created with minor input perturbations. 
This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.\",\"PeriodicalId\":47779,\"journal\":{\"name\":\"Wiley Interdisciplinary Reviews-Computational Statistics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2020-04-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/wics.1511\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Wiley Interdisciplinary Reviews-Computational Statistics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1002/wics.1511\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"STATISTICS & PROBABILITY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wiley Interdisciplinary Reviews-Computational Statistics","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1002/wics.1511","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"STATISTICS & PROBABILITY","Score":null,"Total":0}
Citations: 15

Abstract

We provide a comprehensive overview of adversarial machine learning focusing on two application domains, that is, cybersecurity and computer vision. Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques—they are vulnerable to carefully crafted attacks from malicious adversaries. For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks against machine learning techniques—poisoning attacks, evasion attacks, and privacy attacks. Then the corresponding defense approaches are introduced along with the weakness and limitations of the existing defense approaches. We notice adversarial samples in cybersecurity and computer vision are fundamentally different. While adversarial samples in cybersecurity often have different properties/distributions compared with training data, adversarial images in computer vision are created with minor input perturbations. This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.
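As a concrete instance of the "imperceptible perturbations" the abstract describes, the sketch below implements the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), a standard evasion-style construction in the adversarial ML literature. This is an illustrative example rather than the paper's own method; the function name `fgsm_attack` and the perturbation budget `epsilon = 8/255` are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Minimal FGSM sketch (illustrative, not from the paper):
    nudge every pixel by +/-epsilon in the direction that
    increases the classifier's loss.

    Assumes `images` is a batch of tensors scaled to [0, 1] and
    `labels` holds the true class indices.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step, then clip back to the valid pixel
    # range so the result is still a well-formed image.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

With a small budget such as 8/255, the perturbed image is typically indistinguishable from the clean one to a human observer yet often flips the model's prediction, which is exactly the vulnerability the abstract highlights.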
Source journal: Wiley Interdisciplinary Reviews-Computational Statistics
CiteScore: 6.20
Self-citation rate: 0.00%
Articles published: 31