{"title":"Defense Method Challenges Against Backdoor Attacks in Neural Networks","authors":"Samaneh Shamshiri, Insoo Sohn","doi":"10.1109/ICAIIC60209.2024.10463411","DOIUrl":null,"url":null,"abstract":"Open-source machine-learning models demon-strated promising performance in a wide range of applications. However, they have been proved to be fragile against backdoor attacks. Backdoor attack, as a cyber-threat, results in targeted or not-targeted mis-classification of the neural networks without effecting the accuracy of the benign data samples. This happens through inserting imperceptible malicious triggers to the small part of datasets to change the prediction of the model based on attacker desired results. Therefore, a big part of researches focused on improving the robustness of the neural networks using different kind of detection and mitigation algorithms. In this paper, we discussed the challenges of the defense methods against backdoor attacks in machine learning models. Furthermore, we explored three state-of-the art defense algorithms against BDs including DB-COVIDNet, fine-pruning, LPSF and delve into the evolving landscape of backdoor attacks and the inherent difficulties in developing robust defense mechanisms.","PeriodicalId":518256,"journal":{"name":"2024 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"20 6","pages":"396-400"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIIC60209.2024.10463411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Open-source machine-learning models have demonstrated promising performance across a wide range of applications. However, they have proved fragile against backdoor attacks. A backdoor attack, as a cyber-threat, causes targeted or untargeted misclassification in a neural network without affecting its accuracy on benign data samples. The attack works by inserting imperceptible malicious triggers into a small portion of the dataset, steering the model's predictions toward attacker-desired results. Consequently, a large body of research has focused on improving the robustness of neural networks through various detection and mitigation algorithms. In this paper, we discuss the challenges facing defense methods against backdoor attacks in machine learning models. Furthermore, we explore three state-of-the-art defense algorithms, DB-COVIDNet, fine-pruning, and LPSF, and delve into the evolving landscape of backdoor attacks and the inherent difficulties in developing robust defense mechanisms.
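To make the trigger-insertion mechanism described above concrete, the sketch below shows a minimal BadNets-style data-poisoning routine. It is an illustration of the general technique only, not code from the paper; the function name, the 5% poison rate, the target label, and the 3x3 white-square corner trigger are all assumptions chosen for the example.

```python
# Minimal sketch of BadNets-style dataset poisoning (illustrative only;
# not the paper's implementation). Assumes image tensors of shape
# (N, C, H, W) with pixel values in [0, 1].
import torch

def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   trigger_size=3, trigger_value=1.0):
    """Stamp a small trigger patch onto a random fraction of samples and
    relabel them with the attacker's target class. The untouched clean
    samples are why benign accuracy is largely preserved."""
    images, labels = images.clone(), labels.clone()
    n = images.size(0)
    idx = torch.randperm(n)[: int(poison_rate * n)]
    # a white square in the bottom-right corner serves as the trigger
    images[idx, :, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx
```

Training on the returned set teaches the model to associate the corner patch with the target class, while its predictions on patch-free inputs remain essentially unchanged, which is precisely why such attacks evade accuracy-based validation and motivate the detection and mitigation defenses surveyed in this paper.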