Majdi Maabreh, Omar M. Darwish, Ola Karajeh, Yahya M. Tashtoush
DOI: 10.1109/ACIT57182.2022.9994126
2022 International Arab Conference on Information Technology (ACIT), published 2022-11-22
On developing deep learning models with particle swarm optimization in the presence of poisoning attacks
Deep learning (DL) has demonstrated numerous successes across a variety of fields, particularly in the era of big data. Training a deep learning model entails selecting suitable hyperparameters, such as the number of hidden layers and the number of neurons per layer. Particle Swarm Optimization (PSO) is a useful nature-inspired algorithm for setting these two influential parameters. In this study, two different datasets are used to evaluate PSO-tuned deep learning models on data with different concentrations of poisoning attacks, in which adversarial samples are crafted by attackers to corrupt the learning process. The results show that PSO can find deep learning configurations that maximize model accuracy on the unseen testing dataset even in the presence of poisoned data, and can even yield better recommendations than those obtained on fully benign samples. This raises the concern that optimizers might conceal the existence of data poisoning, which may lead to unreliable learning in later stages of upgrading a model on updated datasets.
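The abstract describes PSO searching over two hyperparameters, the number of hidden layers and the number of neurons. A minimal illustrative sketch of that search is below; it is not the authors' implementation. The objective function here is a hypothetical stand-in for validation accuracy (peaked near 3 layers and 64 neurons purely for demonstration), and the bounds and PSO coefficients are assumed values. In a real use, the objective would train a network with the candidate settings and return its validation accuracy.

```python
import random

def objective(layers, neurons):
    # Stand-in for validation accuracy; hypothetical peak near (3, 64).
    # A real objective would train and validate a deep network here.
    return 1.0 - 0.1 * abs(layers - 3) - 0.002 * abs(neurons - 64)

def pso(n_particles=20, iters=50, seed=0):
    random.seed(seed)
    lo, hi = (1, 8), (5, 128)   # assumed bounds: layers in [1,5], neurons in [8,128]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and cognitive/social coefficients

    # Initialize particle positions, velocities, and personal bests.
    pos = [[random.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]

    def score(p):
        # Evaluate on rounded (integer) hyperparameter settings.
        return objective(round(p[0]), round(p[1]))

    pbest = [p[:] for p in pos]
    pscore = [score(p) for p in pos]
    gbest = pbest[max(range(n_particles), key=lambda i: pscore[i])][:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # Standard PSO velocity update: inertia + pull toward
                # personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move and clamp to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            s = score(pos[i])
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s > score(gbest):
                    gbest = pos[i][:]
    return round(gbest[0]), round(gbest[1]), score(gbest)
```

Note that nothing in this loop distinguishes benign from poisoned training data: the swarm simply maximizes the measured score, which is the behavior the paper flags as potentially masking a poisoning attack.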