Analysis on Data Poisoning Attack Detection Using Machine Learning Techniques and Artificial Intelligence

IF 0.6 | CAS Region 4 (Engineering & Technology) | JCR Q4 | ENGINEERING, ELECTRICAL & ELECTRONIC
Emad Alsuwat
{"title":"Analysis on Data Poisoning Attack Detection Using Machine Learning Techniques and Artificial Intelligence","authors":"Emad Alsuwat","doi":"10.1166/jno.2023.3436","DOIUrl":null,"url":null,"abstract":"One of the primary challenges of artificial intelligence in modern computing is providing privacy and security against adversarial opponents. This survey study covers the most representative poisoning attacks against supervised ML models. The major purpose of this survey is to highlight\n the most essential facts on security vulnerabilities in context of ML classifiers. Data poisoning attacks entail tampering with data samples provided to method during training stage, which may lead to a drop in the correctness and accuracy during inference stage. This research gathers most\n significant insights as well as discoveries from most recent existing literature on this topic. Furthermore, this work discusses several defence strategies that promise to provide feasible detection as well as mitigation procedures, as well as extra robustness against malicious attacks.","PeriodicalId":16446,"journal":{"name":"Journal of Nanoelectronics and Optoelectronics","volume":" ","pages":""},"PeriodicalIF":0.6000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Nanoelectronics and Optoelectronics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1166/jno.2023.3436","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

One of the primary challenges for artificial intelligence in modern computing is providing privacy and security against adversarial opponents. This survey covers the most representative poisoning attacks against supervised machine learning (ML) models. Its main purpose is to highlight the most essential facts about security vulnerabilities in the context of ML classifiers. Data poisoning attacks involve tampering with the data samples supplied to a model during the training stage, which can reduce its correctness and accuracy during the inference stage. This work gathers the most significant insights and findings from the most recent literature on the topic. Furthermore, it discusses several defence strategies that promise feasible detection and mitigation procedures, as well as additional robustness against malicious attacks.
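To make the two ideas in the abstract concrete, the sketch below simulates a label-flipping poisoning attack against a supervised classifier and then applies a simple nearest-neighbour sanitization defence. This is a minimal illustration, not the survey's own method: the synthetic dataset, the logistic-regression victim model, the 20% poisoning rate, and the k-NN relabelling defence are all assumptions chosen for brevity.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup):
# a label-flipping data poisoning attack and a simple k-NN sanitization defence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for the victim's training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def test_accuracy(X_tr, y_tr):
    """Train the victim model and report accuracy on clean test data."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_test, y_test)

# 1. Clean baseline.
print("clean accuracy:    ", test_accuracy(X_train, y_train))

# 2. Poisoning: the attacker flips the labels of 20% of the training samples.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
print("poisoned accuracy: ", test_accuracy(X_train, y_poisoned))

# 3. Sanitization defence: relabel training points whose label disagrees with
#    the majority vote of their k nearest neighbours in the (poisoned) set.
knn = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_poisoned)
neighbour_vote = knn.predict(X_train)
suspicious = neighbour_vote != y_poisoned
y_cleaned = y_poisoned.copy()
y_cleaned[suspicious] = neighbour_vote[suspicious]
print("sanitized accuracy:", test_accuracy(X_train, y_cleaned))
```

In this toy setting the poisoned model's test accuracy drops noticeably relative to the clean baseline, and the neighbour-vote relabelling recovers much of the loss; real defences surveyed in the literature use more principled outlier detection and robust training, but the structure (attack on training data, drop at inference, data sanitization before training) is the same.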
Source journal
Journal of Nanoelectronics and Optoelectronics (Engineering, Electrical & Electronic)
Self-citation rate: 16.70%
Articles per year: 48
Review time: 12.5 months