Analysis on Data Poisoning Attack Detection Using Machine Learning Techniques and Artificial Intelligence
Emad Alsuwat
Journal of Nanoelectronics and Optoelectronics, published 2023-05-01. DOI: 10.1166/jno.2023.3436
Citations: 0
Abstract
One of the primary challenges for artificial intelligence in modern computing is preserving privacy and security against adversarial opponents. This survey covers the most representative poisoning attacks against supervised machine learning (ML) models. Its main purpose is to highlight the most essential facts about security vulnerabilities in the context of ML classifiers. Data poisoning attacks tamper with the data samples supplied to a model during the training stage, which can degrade its correctness and accuracy at inference time. This survey gathers the most significant insights and findings from the most recent literature on the topic. Furthermore, it discusses several defence strategies that promise feasible detection and mitigation procedures, as well as additional robustness against malicious attacks.
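To make the mechanism concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of one of the simplest poisoning attacks the survey's framing covers: label flipping. A nearest-centroid classifier is trained on clean two-class data and then on the same data with a few labels flipped; the flipped labels drag one class centroid toward the other class, shifting the decision boundary so that a previously well-classified point is misclassified at inference time. All data and function names here are illustrative.

```python
# Label-flipping poisoning sketch: training on tampered labels shifts the
# learned centroids and degrades accuracy at inference time.

def centroid(points):
    # Mean of a list of 2-D points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    # data: list of ((x, y), label) pairs with labels 0 or 1.
    c0 = centroid([p for p, lab in data if lab == 0])
    c1 = centroid([p for p, lab in data if lab == 1])
    return c0, c1

def predict(model, p):
    # Assign the point to the nearer class centroid (squared distance).
    c0, c1 = model
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

def accuracy(model, data):
    return sum(predict(model, p) == lab for p, lab in data) / len(data)

# Two well-separated clusters: class 0 near the origin, class 1 near (5.5, 5.5).
clean = [((x, y), 0) for x in (0, 1) for y in (0, 1)] + \
        [((x, y), 1) for x in (5, 6) for y in (5, 6)]

# Held-out points; (2.5, 2.5) sits on the class-0 side of the clean boundary.
test_set = [((0.5, 0.5), 0), ((2.5, 2.5), 0), ((5.5, 5.5), 1)]

clean_model = train(clean)

# Poisoning step: flip the labels of the first three class-0 training points.
poisoned = [(p, 1 if i < 3 else lab) for i, (p, lab) in enumerate(clean)]
poisoned_model = train(poisoned)

print(accuracy(clean_model, test_set))     # correct on all held-out points
print(accuracy(poisoned_model, test_set))  # lower: the boundary has shifted
```

The attack needs no access to the model internals, only write access to the training labels, which is why the defences discussed in the survey focus on detecting and filtering suspicious training samples before or during training.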