The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality

L. Amsaleg, J. Bailey, Dominique Barbe, S. Erfani, M. Houle, Vinh Nguyen, Miloš Radovanović
{"title":"The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality","authors":"L. Amsaleg, J. Bailey, Dominique Barbe, S. Erfani, M. Houle, Vinh Nguyen, Miloš Radovanović","doi":"10.1109/WIFS.2017.8267651","DOIUrl":null,"url":null,"abstract":"Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding to the input object an imperceptible amount of adversarial noise, it is highly likely that the classifier can be tricked into assigning the modified object to any desired class. It has also been observed that these adversarial samples generalize well across models. A complete understanding of the nature of adversarial samples has not yet emerged. Towards this goal, we present a novel theoretical result formally linking the adversarial vulnerability of learning to the intrinsic dimensionality of the data. In particular, our investigation establishes that as the local intrinsic dimensionality (LID) increases, 1-NN classifiers become increasingly prone to being subverted. We show that in expectation, a k-nearest neighbor of a test point can be transformed into its 1-nearest neighbor by adding an amount of noise that diminishes as the LID increases. We also provide an experimental validation of the impact of LID on adversarial perturbation for both synthetic and real data, and discuss the implications of our result for general classifiers.","PeriodicalId":305837,"journal":{"name":"2017 IEEE Workshop on Information Forensics and Security (WIFS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"57","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Workshop on Information Forensics and Security (WIFS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WIFS.2017.8267651","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 57

Abstract

Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding to the input object an imperceptible amount of adversarial noise, it is highly likely that the classifier can be tricked into assigning the modified object to any desired class. It has also been observed that these adversarial samples generalize well across models. A complete understanding of the nature of adversarial samples has not yet emerged. Towards this goal, we present a novel theoretical result formally linking the adversarial vulnerability of learning to the intrinsic dimensionality of the data. In particular, our investigation establishes that as the local intrinsic dimensionality (LID) increases, 1-NN classifiers become increasingly prone to being subverted. We show that in expectation, a k-nearest neighbor of a test point can be transformed into its 1-nearest neighbor by adding an amount of noise that diminishes as the LID increases. We also provide an experimental validation of the impact of LID on adversarial perturbation for both synthetic and real data, and discuss the implications of our result for general classifiers.
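The abstract's central quantity, local intrinsic dimensionality (LID), can be estimated from nearest-neighbor distances via the maximum-likelihood estimator of Amsaleg et al. (2015). The sketch below is not taken from this paper; it is a minimal illustration (the helper name `lid_mle` and the toy uniform-ball data are assumptions) of the estimator and of the geometric intuition behind the result: as LID grows, the ratio between the 1-NN and k-NN distances of a query approaches 1, so the relative perturbation needed to turn a k-nearest neighbor into the 1-nearest neighbor shrinks.

```python
import numpy as np

def lid_mle(query, data, k=20):
    """MLE estimate of local intrinsic dimensionality (Amsaleg et al., 2015):
    LID(x) ~= -( (1/k) * sum_{i=1..k} ln(r_i / r_k) )^{-1},
    where r_1 <= ... <= r_k are the distances from x to its k nearest neighbors."""
    dists = np.linalg.norm(data - query, axis=1)
    dists = np.sort(dists[dists > 0])[:k]          # drop the query itself if present
    return -1.0 / np.mean(np.log(dists / dists[-1]))

# Toy illustration: points drawn uniformly from a d-dimensional ball, query at the center.
# There the true LID equals d.  As d grows, r_1/r_k tends to 1, i.e. the multiplicative
# noise needed to swap a k-NN into the 1-NN diminishes -- the effect the paper formalizes.
rng = np.random.default_rng(0)
for d in (2, 8, 32, 128):
    data = rng.standard_normal((5000, d))
    data /= np.linalg.norm(data, axis=1, keepdims=True)   # project onto the unit sphere
    data *= rng.random((5000, 1)) ** (1.0 / d)            # radii for a uniform ball
    query = np.zeros(d)
    nn = np.sort(np.linalg.norm(data - query, axis=1))[:20]
    print(f"d={d:4d}  LID_hat={lid_mle(query, data):7.1f}  r1/rk={nn[0] / nn[-1]:.3f}")
```

With only k = 20 neighbors the LID estimate is noisy, but the trend is clear: the estimate tracks d, and the 1-NN/k-NN distance ratio climbs toward 1 as the dimension increases.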