{"title":"垃圾邮件过滤器的对抗性机器学习","authors":"Bhargav Kuchipudi, Ravi Teja Nannapaneni, Qi Liao","doi":"10.1145/3407023.3407079","DOIUrl":null,"url":null,"abstract":"Email spam filters based on machine learning techniques are widely deployed in today's organizations. As our society relies more on artificial intelligence (AI), the security of AI, especially the machine learning algorithms, becomes increasingly important and remains largely untested. Adversarial machine learning, on the other hand, attempts to defeat machine learning models through malicious input. In this paper, we experiment how adversarial scenario may impact the security of machine learning based mechanisms such as email spam filters. Using natural language processing (NLP) and Baysian model as an example, we developed and tested three invasive techniques, i.e., synonym replacement, ham word injection and spam word spacing. Our adversarial examples and results suggest that these techniques are effective in fooling the machine learning models. The study calls for more research on understanding and safeguarding machine learning based security mechanisms in the presence of adversaries.","PeriodicalId":121225,"journal":{"name":"Proceedings of the 15th International Conference on Availability, Reliability and Security","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":"{\"title\":\"Adversarial machine learning for spam filters\",\"authors\":\"Bhargav Kuchipudi, Ravi Teja Nannapaneni, Qi Liao\",\"doi\":\"10.1145/3407023.3407079\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Email spam filters based on machine learning techniques are widely deployed in today's organizations. As our society relies more on artificial intelligence (AI), the security of AI, especially the machine learning algorithms, becomes increasingly important and remains largely untested. Adversarial machine learning, on the other hand, attempts to defeat machine learning models through malicious input. In this paper, we experiment how adversarial scenario may impact the security of machine learning based mechanisms such as email spam filters. Using natural language processing (NLP) and Baysian model as an example, we developed and tested three invasive techniques, i.e., synonym replacement, ham word injection and spam word spacing. Our adversarial examples and results suggest that these techniques are effective in fooling the machine learning models. 
The study calls for more research on understanding and safeguarding machine learning based security mechanisms in the presence of adversaries.\",\"PeriodicalId\":121225,\"journal\":{\"name\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"volume\":\"90 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"24\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3407023.3407079\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on Availability, Reliability and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3407023.3407079","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Email spam filters based on machine learning techniques are widely deployed in today's organizations. As our society relies more on artificial intelligence (AI), the security of AI, especially of machine learning algorithms, becomes increasingly important yet remains largely untested. Adversarial machine learning, on the other hand, attempts to defeat machine learning models through malicious input. In this paper, we experiment with how adversarial scenarios may impact the security of machine learning based mechanisms such as email spam filters. Using natural language processing (NLP) and a Bayesian model as an example, we developed and tested three invasive techniques, namely synonym replacement, ham word injection, and spam word spacing. Our adversarial examples and results suggest that these techniques are effective in fooling the machine learning models. The study calls for more research on understanding and safeguarding machine learning based security mechanisms in the presence of adversaries.
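To make the three evasion techniques concrete, the following is a minimal sketch, not the authors' implementation: it assumes a bag-of-words Naive Bayes spam classifier (scikit-learn) and uses invented training messages, synonym maps, and word lists purely for illustration.

```python
# Illustrative sketch only: a toy Naive Bayes spam filter and the three
# evasion transformations named in the abstract -- synonym replacement,
# ham word injection, and spam word spacing. All data and word lists
# below are hypothetical, not taken from the paper.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training corpus (hypothetical)
messages = [
    "win free money now claim your prize",          # spam
    "cheap meds limited offer click here",          # spam
    "meeting moved to friday see attached agenda",  # ham
    "lunch tomorrow with the project team",         # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()
clf = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

def predict(text: str) -> str:
    return "spam" if clf.predict(vectorizer.transform([text]))[0] == 1 else "ham"

# Technique 1: synonym replacement (hypothetical synonym map)
SYNONYMS = {"free": "complimentary", "win": "receive", "money": "funds"}

def synonym_replacement(text: str) -> str:
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

# Technique 2: ham word injection (append benign-looking words)
HAM_WORDS = ["meeting", "agenda", "project", "team"]

def ham_word_injection(text: str) -> str:
    return text + " " + " ".join(HAM_WORDS)

# Technique 3: spam word spacing (break spam-indicative tokens apart)
SPAM_WORDS = {"free", "win", "money", "prize", "meds"}

def spam_word_spacing(text: str) -> str:
    return " ".join(" ".join(w) if w in SPAM_WORDS else w for w in text.split())

if __name__ == "__main__":
    original = "win free money now claim your prize"
    print("original:          ", predict(original))
    print("synonym replaced:  ", predict(synonym_replacement(original)))
    print("ham words injected:", predict(ham_word_injection(original)))
    print("spam words spaced: ", predict(spam_word_spacing(original)))
```

In this sketch, spam word spacing works because the default word tokenizer discards single characters, so "f r e e" never matches the learned "free" feature; ham word injection shifts the posterior toward the ham class by flooding the message with tokens the model associates with legitimate mail. The real filters and datasets studied in the paper may behave differently.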