Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting

Aasher Khan, Suriya Rehman, Muhammad U. S. Khan, Mazhar Ali
{"title":"Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting","authors":"Aasher Khan, Suriya Rehman, Muhammad U. S. Khan, Mazhar Ali","doi":"10.1109/ICEEST48626.2019.8981685","DOIUrl":null,"url":null,"abstract":"Twitter being the most popular content sharing platform is giving rise to automated accounts called “bots”. Majority of the users on Twitter are bots. Various machine learning (ML) algorithms are designed to detect bots avoiding the vulnerability constraints of ML-based models. This paper contributes to exploit vulnerabilities of machine learning (ML) algorithms through black-box attack. An adversarial text sequence misclassifies the results of deep learning (DL) classifiers for bot detection. Literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show 7.2% decrease in the accuracy for bot tweets, therefore classifying bot tweets as legitimate tweets.","PeriodicalId":201513,"journal":{"name":"2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEEST48626.2019.8981685","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Twitter, the most popular content-sharing platform, is giving rise to automated accounts called "bots", and a majority of users on Twitter are bots. Various machine learning (ML) algorithms are designed to detect bots without addressing the vulnerabilities inherent in ML-based models. This paper exploits the vulnerabilities of ML algorithms through a black-box attack: an adversarial text sequence causes deep learning (DL) classifiers used for bot detection to misclassify. The literature shows that ML models are vulnerable to such attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy on bot tweets, causing bot tweets to be classified as legitimate tweets.
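As a rough illustration of how a black-box synonym-replacement attack of this kind can be mounted, the sketch below greedily swaps words in a tweet for WordNet synonyms and keeps a swap only when the classifier's bot score drops. This is a minimal sketch, not the paper's exact pipeline: the scoring function `query_bot_score` is a hypothetical stand-in for whatever black-box bot detector is queried, and the greedy search and use of NLTK's WordNet are assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the authors' exact method): a greedy
# black-box synonym-replacement attack. `query_bot_score(text) -> float` is a
# hypothetical interface returning the classifier's bot probability; the
# attacker sees only this score, never gradients or model parameters.
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')


def synonyms(word):
    """Collect WordNet synonyms for a word, excluding the word itself."""
    lemmas = {
        lemma.name().replace("_", " ")
        for syn in wordnet.synsets(word)
        for lemma in syn.lemmas()
    }
    lemmas.discard(word)
    return lemmas


def synonym_attack(tweet, query_bot_score, max_replacements=3):
    """Greedily replace words with synonyms while the bot score decreases."""
    words = tweet.split()
    best_score = query_bot_score(tweet)
    for _ in range(max_replacements):
        best_swap = None
        # Try every single-word swap and keep the one that lowers the score most.
        for i, word in enumerate(words):
            for candidate in synonyms(word.lower()):
                trial = words[:i] + [candidate] + words[i + 1:]
                score = query_bot_score(" ".join(trial))
                if score < best_score:
                    best_score, best_swap = score, trial
        if best_swap is None:  # no synonym lowers the score any further
            break
        words = best_swap
    return " ".join(words), best_score
```

The paper's attack may select and rank candidate words differently; the sketch only conveys the query-only (black-box) character of the approach, in which the attacker perturbs tweet text with synonyms and observes the detector's output to steer bot tweets toward the legitimate class.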