Beta Poisoning Attacks Against Machine Learning Models: Extensions, Limitations and Defenses

Atakan Kara, Nursena Koprucu, M. E. Gursoy
{"title":"Beta Poisoning Attacks Against Machine Learning Models: Extensions, Limitations and Defenses","authors":"Atakan Kara, Nursena Koprucu, M. E. Gursoy","doi":"10.1109/TPS-ISA56441.2022.00031","DOIUrl":null,"url":null,"abstract":"The rise of machine learning (ML) has made ML models lucrative targets for adversarial attacks. One of these attacks is Beta Poisoning, which is a recently proposed training-time attack based on heuristic poisoning of the training dataset. While Beta Poisoning was shown to be effective against linear ML models, it was originally developed with a fixed Gaussian Kernel Density Estimator (KDE) for likelihood estimation, and its effectiveness against more advanced, non-linear ML models has not been explored. In this paper, we advance the state of the art in Beta Poisoning attacks by making three novel contributions. First, we extend the attack so that it can be executed with arbitrary KDEs and norm functions. We integrate Gaussian, Laplacian, Epanechnikov and Logistic KDEs with three norm functions, and show that the choice of KDE can significantly impact attack effectiveness, especially when attacking linear models. Second, we empirically show that Beta Poisoning attacks are ineffective against non-linear ML models (such as neural networks and multi-layer perceptrons), even with our extensions. Results imply that the effectiveness of the attack decreases as model non-linearity and complexity increase. Finally, our third contribution is the development of a discriminator-based defense against Beta Poisoning attacks. Results show that our defense strategy achieves 99% and 93% accuracy in identifying poisoning samples on MNIST and CIFAR-10 datasets, respectively.","PeriodicalId":427887,"journal":{"name":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TPS-ISA56441.2022.00031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The rise of machine learning (ML) has made ML models lucrative targets for adversarial attacks. One of these attacks is Beta Poisoning, which is a recently proposed training-time attack based on heuristic poisoning of the training dataset. While Beta Poisoning was shown to be effective against linear ML models, it was originally developed with a fixed Gaussian Kernel Density Estimator (KDE) for likelihood estimation, and its effectiveness against more advanced, non-linear ML models has not been explored. In this paper, we advance the state of the art in Beta Poisoning attacks by making three novel contributions. First, we extend the attack so that it can be executed with arbitrary KDEs and norm functions. We integrate Gaussian, Laplacian, Epanechnikov and Logistic KDEs with three norm functions, and show that the choice of KDE can significantly impact attack effectiveness, especially when attacking linear models. Second, we empirically show that Beta Poisoning attacks are ineffective against non-linear ML models (such as neural networks and multi-layer perceptrons), even with our extensions. Results imply that the effectiveness of the attack decreases as model non-linearity and complexity increase. Finally, our third contribution is the development of a discriminator-based defense against Beta Poisoning attacks. Results show that our defense strategy achieves 99% and 93% accuracy in identifying poisoning samples on MNIST and CIFAR-10 datasets, respectively.
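The first contribution described above is making the attack's likelihood estimator pluggable, so that any of the four kernels can be combined with any of the three norm functions. As a minimal sketch of what such a pluggable KDE scorer could look like (the function names, bandwidth handling, and the L1/L2/L-infinity norm choices are our own assumptions for illustration, not the paper's code), consider:

```python
# Minimal sketch of a KDE likelihood estimator with pluggable kernels and
# norm functions, in the spirit of the extended Beta Poisoning attack.
# Kernel forms are the standard Gaussian, Laplacian, Epanechnikov and
# Logistic kernels; everything else here is an illustrative assumption.
import numpy as np

def make_norm(p):
    """Return a distance function: p=1 (L1), p=2 (L2), or np.inf (L-infinity)."""
    return lambda a, b: np.linalg.norm(a - b, ord=p, axis=-1)

KERNELS = {
    # u is the normalized distance ||x - x_i|| / h
    "gaussian":     lambda u: np.exp(-0.5 * u**2),
    "laplacian":    lambda u: np.exp(-np.abs(u)),
    "epanechnikov": lambda u: np.clip(1.0 - u**2, 0.0, None),
    "logistic":     lambda u: 1.0 / (np.exp(u) + 2.0 + np.exp(-u)),
}

def kde_likelihood(x, samples, kernel="gaussian", norm_p=2, bandwidth=1.0):
    """Estimate the (unnormalized) likelihood of point x under one class's samples."""
    dist = make_norm(norm_p)(samples, x)          # distances to every sample
    u = dist / bandwidth
    return KERNELS[kernel](u).mean() / bandwidth  # average kernel response
```

Swapping the `kernel` and `norm_p` arguments corresponds to the attack variants compared in the paper; per the abstract, this choice matters most when the target model is linear.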
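The third contribution is a discriminator-based defense that flags poisoning samples before training. The abstract does not describe the discriminator's architecture or training procedure, so the following is only a hypothetical sketch of the general idea: train a binary classifier on known clean versus known poisoned samples, then filter the incoming training set with it.

```python
# Minimal sketch of a discriminator-based filtering defense. The paper's
# actual discriminator and training data are not described in the abstract;
# a simple scikit-learn MLP stands in here purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_discriminator(clean_samples, poison_samples):
    """Fit a binary discriminator: label 0 = clean, 1 = poisoned."""
    X = np.vstack([clean_samples, poison_samples])
    y = np.concatenate([np.zeros(len(clean_samples)),
                        np.ones(len(poison_samples))])
    disc = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
    return disc.fit(X, y)

def filter_training_set(disc, X_train, y_train, threshold=0.5):
    """Drop samples the discriminator flags as likely poisoning points."""
    poison_prob = disc.predict_proba(X_train)[:, 1]
    keep = poison_prob < threshold
    return X_train[keep], y_train[keep]
```

Per the abstract, a defense of this kind identifies Beta Poisoning samples with 99% accuracy on MNIST and 93% on CIFAR-10.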